helion.language.Tile
- class helion.language.Tile(block_id)
This class should not be instantiated directly; it is the result of hl.tile(…) and represents a single tile of the iteration space.
Tiles can be used as indices into tensors, e.g. tensor[tile]. Tiles can also be used as sizes for allocations, e.g. torch.empty([tile]). Properties such as tile.index, tile.begin, tile.end, tile.id, tile.block_size, and tile.count can be used to retrieve various information about the tile.
Masking is implicit for tiles: if the final tile is smaller than the block size, loading that tile only loads the valid elements, and reduction operations know to ignore the invalid elements.
- Parameters:
  block_id (int)
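A minimal usage sketch, assuming a Helion kernel entry point (the kernel name, tensor arguments, and body below are illustrative, not part of this reference):

```python
import torch
import helion
import helion.language as hl

@helion.kernel()
def vector_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    # hl.tile(...) yields Tile objects covering the iteration space.
    for tile in hl.tile(x.size(0)):
        # A Tile indexes tensors directly; the final, possibly partial
        # tile is masked implicitly so only valid elements are loaded.
        out[tile] = x[tile] + y[tile]
    return out
```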
Methods
- __init__(block_id) – Parameters: block_id (int)
The remaining entries in this table are methods inherited from torch.Tensor (abs(), add(), matmul(), sum(), and so on); see the torch.Tensor documentation for their descriptions.
Attributes
The attributes specific to Tile are:
- begin – Alias for tile_begin(), which retrieves the start offset of a tile.
- block_size – Alias for tile_block_size(), which retrieves the block size of a tile.
- count – Alias for tile_count(), which retrieves the number of tiles.
- end – Alias for tile_end(), which retrieves the end offset of a tile.
- id – Alias for tile_id(), which retrieves the id of a tile.
- index – Alias for tile_index(), which retrieves a tensor containing the offsets for a tile.

All other attributes (H, T, device, dtype, grad, shape, and so on) are inherited from torch.Tensor; see the torch.Tensor documentation for their descriptions. A short sketch of reading these properties follows this list.