PositionalEncoding

public struct PositionalEncoding<Element, Device> : LayerType, Codable where Element : RandomizableType, Device : DeviceType

Positional Encoding layer using the encoding method proposed in Attention Is All You Need.

The layer takes an array of Ints as input, which indicates the number of elements in each sequence of the minibatch. It returns a tensor with the shape [max(inputs), hiddenSize]. It does not mask out positional encodings for padding elements.
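The encoding itself follows the sinusoidal scheme from Attention Is All You Need: even feature indices use a sine, odd indices a cosine, at geometrically spaced frequencies. A minimal sketch in plain Swift (illustrative only; the actual layer computes this on-device as a `Tensor`):

```swift
import Foundation

// Sketch of the sinusoidal positional encoding from "Attention Is All You Need".
// Row `pos` and column `i` hold sin/cos of pos scaled by a frequency that
// depends on the feature pair index (i / 2).
func sinusoidalEncoding(maxLen: Int, hiddenSize: Int) -> [[Double]] {
    (0 ..< maxLen).map { pos in
        (0 ..< hiddenSize).map { i in
            let exponent = Double(2 * (i / 2)) / Double(hiddenSize)
            let angle = Double(pos) / pow(10000, exponent)
            // Even indices use sine, odd indices use cosine, at the same frequency.
            return i % 2 == 0 ? sin(angle) : cos(angle)
        }
    }
}
```

Because the encoding depends only on position and feature index, it has no trainable parameters, which is reflected in the empty `parameters` array below.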

  • Declaration

    Swift

    public typealias Parameter = Element
  • Number of elements in the positional encoding output tensor.

    Declaration

    Swift

    public let hiddenSize: Int
  • Creates a Positional Encoding layer using the encoding method proposed in Attention Is All You Need.

    Declaration

    Swift

    public init(hiddenSize: Int)

    Parameters

    hiddenSize

    Number of elements in the positional encoding output tensor.

  • Declaration

    Swift

    public var parameters: [Tensor<Element, Device>] { get }
  • Declaration

    Swift

    public var parameterPaths: [WritableKeyPath<`Self`, Tensor<Element, Device>>] { get }
  • Creates a positional encoding matrix for a given maximum sequence length.

    Declaration

    Swift

    public func callAsFunction(_ maxLen: Int) -> Tensor<Element, Device>

    Parameters

    maxLen

    Maximum sequence length in the current minibatch

    Return Value

    Tensor of shape [maxLen, hiddenSize]
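A hedged usage sketch, assuming a `Float` element type and a `CPU` device type are available in the library, and an `embedding` layer exists in scope (both are assumptions for illustration):

```swift
// Create the layer with a 64-dimensional encoding.
let encoding = PositionalEncoding<Float, CPU>(hiddenSize: 64)

// Sequence lengths in the current minibatch; the layer is applied
// with the maximum length, as described above.
let sequenceLengths = [12, 7, 9]
let positional = encoding(sequenceLengths.max()!)  // shape: [12, 64]

// The encodings are typically added to the token embeddings:
// let embedded = embedding(tokens) + positional
```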