Class PutRecordsRequestEntry

- All Implemented Interfaces:
  Serializable, Cloneable

Represents the input for PutRecords.
Constructor Summary

- PutRecordsRequestEntry()
Method Summary

- PutRecordsRequestEntry clone()
- boolean equals(Object obj)
- ByteBuffer getData()
  The data blob to put into the record, which is base64-encoded when the blob is serialized.
- String getExplicitHashKey()
  The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- String getPartitionKey()
  Determines which shard in the stream the data record is assigned to.
- int hashCode()
- void setData(ByteBuffer data)
  The data blob to put into the record, which is base64-encoded when the blob is serialized.
- void setExplicitHashKey(String explicitHashKey)
  The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- void setPartitionKey(String partitionKey)
  Determines which shard in the stream the data record is assigned to.
- String toString()
  Returns a string representation of this object; useful for testing and debugging.
- PutRecordsRequestEntry withData(ByteBuffer data)
  The data blob to put into the record, which is base64-encoded when the blob is serialized.
- PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey)
  The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- PutRecordsRequestEntry withPartitionKey(String partitionKey)
  Determines which shard in the stream the data record is assigned to.
Constructor Details

PutRecordsRequestEntry

public PutRecordsRequestEntry()

Method Details

setData

public void setData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
The AWS SDK for Java performs Base64 encoding on this field before sending the request to the AWS service by default. Users of the SDK should not perform Base64 encoding on this field.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
- Parameters:
  data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
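Because the SDK performs the Base64 step itself, callers should pass the raw payload. A minimal sketch (the JSON payload and partition key below are placeholder values):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class SetDataExample {
    public static void main(String[] args) {
        PutRecordsRequestEntry entry = new PutRecordsRequestEntry();
        // Pass the raw payload bytes; the SDK Base64-encodes the blob when
        // the request is serialized, so the caller must not pre-encode it.
        byte[] payload = "{\"event\":\"click\"}".getBytes(StandardCharsets.UTF_8);
        entry.setData(ByteBuffer.wrap(payload));
        entry.setPartitionKey("user-42"); // placeholder partition key
    }
}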
getData

public ByteBuffer getData()
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.

- Returns:
  The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
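A sketch of that advice (the helper name copyData is illustrative, not part of the SDK):

import java.nio.ByteBuffer;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class GetDataExample {
    // Copies the payload through a read-only view so the original buffer's
    // position is left untouched for any other reader.
    static byte[] copyData(PutRecordsRequestEntry entry) {
        ByteBuffer view = entry.getData().asReadOnlyBuffer();
        byte[] bytes = new byte[view.remaining()];
        view.get(bytes);
        return bytes;
    }
}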
withData

public PutRecordsRequestEntry withData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
- Parameters:
  data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
- Returns:
  Returns a reference to this object so that method calls can be chained together.
setExplicitHashKey

public void setExplicitHashKey(String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- Parameters:
  explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
getExplicitHashKey

public String getExplicitHashKey()
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- Returns:
  The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
withExplicitHashKey

public PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- Parameters:
  explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
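A short sketch of overriding the partition key hash. The hash key value below (2^127) is a placeholder; in practice you would pick a value from the target shard's hash key range:

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class ExplicitHashKeyExample {
    public static void main(String[] args) {
        // The record is routed to whichever shard's hash key range contains
        // 2^127, regardless of how "user-42" would hash.
        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withPartitionKey("user-42")
                .withExplicitHashKey("170141183460469231731687303715884105728");
    }
}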
setPartitionKey

public void setPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
- Parameters:
  partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
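To make the mapping concrete, a sketch that reproduces the MD5-to-128-bit-integer step described above, assuming UTF-8 encoding of the key; a record goes to the shard whose hash key range contains this value:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PartitionKeyHashExample {
    // Maps a partition key to the unsigned 128-bit integer that is compared
    // against each shard's hash key range.
    static BigInteger toHashKey(String partitionKey) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest);
    }
}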
getPartitionKey

public String getPartitionKey()
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
- Returns:
  Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
withPartitionKey

public PutRecordsRequestEntry withPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
- Parameters:
  partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
- Returns:
  Returns a reference to this object so that method calls can be chained together.
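Since each with* method returns the entry itself, construction chains naturally into a PutRecords call. A minimal end-to-end sketch, assuming default client configuration; the stream name and payload are placeholders:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class PutRecordsExample {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withPartitionKey("user-42")
                .withData(ByteBuffer.wrap(
                        "{\"event\":\"click\"}".getBytes(StandardCharsets.UTF_8)));

        kinesis.putRecords(new PutRecordsRequest()
                .withStreamName("example-stream")
                .withRecords(entry));
    }
}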
toString

public String toString()

Returns a string representation of this object; useful for testing and debugging.

- Overrides:
  toString in class Object

equals

public boolean equals(Object obj)

- Overrides:
  equals in class Object

hashCode

public int hashCode()

- Overrides:
  hashCode in class Object

clone

public PutRecordsRequestEntry clone()

- Overrides:
  clone in class Object