Hazelcast Python Client¶
Hazelcast is an open-source distributed in-memory data store and computation platform that provides a wide variety of distributed data structures and concurrency primitives.
Hazelcast Python client is a way to communicate with Hazelcast clusters and access the cluster data. The client provides a Future-based asynchronous API suitable for a wide range of use cases.
Overview¶
Usage¶
import hazelcast
# Connect to Hazelcast cluster.
client = hazelcast.HazelcastClient()
# Get or create the "distributed-map" on the cluster.
distributed_map = client.get_map("distributed-map")
# Put "key", "value" pair into the "distributed-map" and wait for
# the request to complete.
distributed_map.set("key", "value").result()
# Try to get the value associated with the given key from the cluster
# and attach a callback to be executed once the response for the
# get request is received. Note that, the set request above was
# blocking since it calls ".result()" on the returned Future, whereas
# the get request below is non-blocking.
get_future = distributed_map.get("key")
get_future.add_done_callback(lambda future: print(future.result()))
# Do other operations. The operations below won't wait for
# the get request above to complete.
print("Map size:", distributed_map.size().result())
# Shutdown the client.
client.shutdown()
If you are using Hazelcast and the Python client on the same machine, the default configuration should work out-of-the-box. However, you may need to configure the client to connect to cluster nodes that are running on different machines or to customize client properties.
Configuration¶
import hazelcast
client = hazelcast.HazelcastClient(
cluster_name="cluster-name",
cluster_members=[
"10.90.0.2:5701",
"10.90.0.3:5701",
],
lifecycle_listeners=[
lambda state: print("Lifecycle event >>>", state),
]
)
print("Connected to cluster")
client.shutdown()
See the API documentation of hazelcast.client.HazelcastClient
to learn more about supported configuration options.
Features¶
Distributed, partitioned and queryable in-memory key-value store implementation, called Map
Eventually consistent cache implementation to store a subset of the Map data locally in the memory of the client, called Near Cache
Additional data structures and simple messaging constructs such as Set, MultiMap, Queue, Topic
Cluster-wide unique ID generator, called FlakeIdGenerator
Distributed, CRDT based counter, called PNCounter
Distributed concurrency primitives from CP Subsystem such as FencedLock, Semaphore, AtomicLong
Integration with Hazelcast Viridian
Support for serverless and traditional web service architectures with Unisocket and Smart operation modes
Ability to listen to client lifecycle, cluster state, and distributed data structure events
and many more
HazelcastClient API Documentation¶
- class HazelcastClient(config: Optional[Config] = None, **kwargs)[source]¶
Bases:
object
Hazelcast client instance to access and manipulate distributed data structures on Hazelcast clusters.
The client can be configured either by:
providing a configuration object as the first parameter of the constructor
from hazelcast import HazelcastClient
from hazelcast.config import Config

config = Config()
config.cluster_name = "a-cluster"
client = HazelcastClient(config)
passing configuration options as keyword arguments
from hazelcast import HazelcastClient

client = HazelcastClient(
    cluster_name="a-cluster",
)
See the hazelcast.config.Config documentation for the possible configuration options.
- Parameters:
config – Optional configuration object.
**kwargs – Optional keyword arguments of the client configuration.
- get_executor(name: str) Executor [source]¶
Creates a cluster-wide ExecutorService.
- Parameters:
name – Name of the Executor proxy.
- Returns:
Executor proxy for the given name.
- get_flake_id_generator(name: str) FlakeIdGenerator [source]¶
Creates or returns a cluster-wide FlakeIdGenerator.
- Parameters:
name – Name of the FlakeIdGenerator proxy.
- Returns:
FlakeIdGenerator proxy for the given name.
- get_queue(name: str) Queue[ItemType] [source]¶
Returns the distributed queue instance with the specified name.
- Parameters:
name – Name of the distributed queue.
- Returns:
Distributed queue instance with the specified name.
- get_list(name: str) List[ItemType] [source]¶
Returns the distributed list instance with the specified name.
- Parameters:
name – Name of the distributed list.
- Returns:
Distributed list instance with the specified name.
- get_map(name: str) Map[KeyType, ValueType] [source]¶
Returns the distributed map instance with the specified name.
- Parameters:
name – Name of the distributed map.
- Returns:
Distributed map instance with the specified name.
- get_multi_map(name: str) MultiMap[KeyType, ValueType] [source]¶
Returns the distributed MultiMap instance with the specified name.
- Parameters:
name – Name of the distributed MultiMap.
- Returns:
Distributed MultiMap instance with the specified name.
- get_pn_counter(name: str) PNCounter [source]¶
Returns the PN Counter instance with the specified name.
- Parameters:
name – Name of the PN Counter.
- Returns:
Distributed PN Counter instance with the specified name.
- get_reliable_topic(name: str) ReliableTopic[MessageType] [source]¶
Returns the ReliableTopic instance with the specified name.
- Parameters:
name – Name of the ReliableTopic.
- Returns:
Distributed ReliableTopic instance with the specified name.
- get_replicated_map(name: str) ReplicatedMap[KeyType, ValueType] [source]¶
Returns the distributed ReplicatedMap instance with the specified name.
- Parameters:
name – Name of the distributed ReplicatedMap.
- Returns:
Distributed ReplicatedMap instance with the specified name.
- get_ringbuffer(name: str) Ringbuffer[ItemType] [source]¶
Returns the distributed Ringbuffer instance with the specified name.
- Parameters:
name – Name of the distributed Ringbuffer.
- Returns:
Distributed RingBuffer instance with the specified name.
- get_set(name: str) Set[ItemType] [source]¶
Returns the distributed Set instance with the specified name.
- Parameters:
name – Name of the distributed Set.
- Returns:
Distributed Set instance with the specified name.
- get_topic(name: str) Topic[MessageType] [source]¶
Returns the Topic instance with the specified name.
- Parameters:
name – Name of the Topic.
- Returns:
The Topic.
- new_transaction(timeout: float = 120, durability: int = 1, type: int = 1) Transaction [source]¶
Creates a new Transaction associated with the current thread using default or given options.
- Parameters:
timeout – The timeout in seconds determines the maximum lifespan of a transaction. So, if a transaction is configured with a timeout of 2 minutes, it will automatically roll back if it hasn't committed within that time.
durability – The durability is the number of machines that can take over if a member fails during a transaction commit or rollback.
type – The transaction type, which can be TWO_PHASE or ONE_PHASE.
- Returns:
New Transaction associated with the current thread.
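For illustration, a minimal usage sketch with the explicit begin/commit/rollback pattern (the transactional map name and values are hypothetical):
transaction = client.new_transaction(timeout=30)
transaction.begin()
try:
    # Transactional proxies are obtained from the transaction itself.
    txn_map = transaction.get_map("transaction-map")
    txn_map.put("key", "value")
    transaction.commit()
except Exception:
    transaction.rollback()
    raise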
- add_distributed_object_listener(listener_func: Callable[[DistributedObjectEvent], None]) Future[str] [source]¶
Adds a listener which will be notified when a new distributed object is created or destroyed.
- Parameters:
listener_func – Function to be called when a distributed object is created or destroyed.
- Returns:
A registration id which is used as a key to remove the listener.
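For example, a sketch of registering and later removing such a listener (the callback body is illustrative):
def on_distributed_object_event(event):
    # event is a DistributedObjectEvent with name, service_name, and event_type.
    print(event.name, event.event_type)

registration_id = client.add_distributed_object_listener(on_distributed_object_event).result()
# ... later, remove the listener using the returned id.
client.remove_distributed_object_listener(registration_id).result()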
- remove_distributed_object_listener(registration_id: str) Future[bool] [source]¶
Removes the specified distributed object listener.
Returns silently if there is no such listener added before.
- Parameters:
registration_id – The id of registered listener.
- Returns:
True if the registration is removed, False otherwise.
- get_distributed_objects() Future[List[Proxy]] [source]¶
Returns all distributed objects, such as queue, map, set, list, topic, lock, and multimap.
Also, as a side effect, it clears the local instances of the destroyed proxies.
- Returns:
List of instances created by Hazelcast.
- property name: str¶
Name of the client.
- property lifecycle_service: LifecycleService¶
Lifecycle service allows you to check if the client is running and add and remove lifecycle listeners.
- property partition_service: PartitionService¶
Partition service allows you to get the partition count, introspect the partition owners, and find the partition ids of keys.
- property cluster_service: ClusterService¶
Cluster service allows you to get the list of the cluster members and add and remove membership listeners.
- property cp_subsystem: CPSubsystem¶
CP Subsystem offers a set of in-memory linearizable data structures.
- property sql: SqlService¶
Returns a service to execute distributed SQL queries.
Configuration API Documentation¶
- class Config[source]¶
Bases:
object
Hazelcast client configuration.
- property cluster_members: List[str]¶
Candidate address list that the client will use to establish the initial connection.
By default, set to ["127.0.0.1"].
- property cluster_name: str¶
Name of the cluster to connect to.
The name is sent as part of the client authentication message and may be verified on the member. By default, set to dev.
- property client_name: Optional[str]¶
Name of the client instance.
By default, set to hz.client_${CLIENT_ID}, where CLIENT_ID starts from 0 and is incremented by 1 for each new client.
- property connection_timeout: Union[int, float]¶
Socket timeout value in seconds for the client to connect to member nodes.
Setting this to 0 makes the connection blocking. By default, set to 5.0.
- property socket_options: List[Tuple[int, int, Union[int, bytes]]]¶
List of socket option tuples.
The tuples must contain the parameters passed into socket.setsockopt(), in the same order.
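As an illustrative sketch, enlarging the receive buffer of the client sockets (the buffer size is an arbitrary example):
import socket
import hazelcast

client = hazelcast.HazelcastClient(
    socket_options=[
        # Arguments in the order socket.setsockopt() expects: level, option, value.
        (socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024),
    ],
)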
- property redo_operation: bool¶
When set to True, the client will redo the operations that were executing on the server if the client loses the connection.
This can happen because of network problems, or simply because the member died. In that case, it is not clear whether the operation was performed or not. For idempotent operations this is harmless, but for non-idempotent ones retrying can cause undesirable effects. Note that the redo can be processed on any member. By default, set to False.
- property smart_routing: bool¶
Enables smart mode for the client instead of unisocket client.
Smart clients send key-based operations to the owners of the keys. Unisocket clients send all operations to a single node. By default, set to True.
- property ssl_enabled: bool¶
If it is True, SSL is enabled.
By default, set to False.
- property ssl_cafile: Optional[str]¶
Absolute path of concatenated CA certificates used to validate server’s certificates in PEM format.
When SSL is enabled and cafile is not set, a set of default CA certificates from default locations will be used.
- property ssl_certfile: Optional[str]¶
Absolute path of the client certificate in PEM format.
- property ssl_keyfile: Optional[str]¶
Absolute path of the private key file for the client certificate in the PEM format.
If this parameter is None, the private key will be taken from the certfile.
- property ssl_password: Optional[Union[Callable[[], Union[str, bytes]], str, bytes]]¶
Password for decrypting the keyfile if it is encrypted.
The password may be a function to call to get the password. It will be called with no arguments, and it should return a string, bytes, or bytearray. If the return value is a string it will be encoded as UTF-8 before using it to decrypt the key. Alternatively a string, bytes, or bytearray value may be supplied directly as the password.
- property ssl_protocol: int¶
Protocol version used in SSL communication.
By default, set to TLSv1_2. See the hazelcast.config.SSLProtocol for possible values.
- property ssl_ciphers: Optional[str]¶
String in the OpenSSL cipher list format to set the available ciphers for sockets.
More than one cipher can be set by separating them with a colon.
- property ssl_check_hostname: bool¶
When set to True, verifies that the hostname in the member's certificate and the address of the member match during the handshake.
By default, set to False.
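Taken together, a hedged sketch of a mutual-TLS client configuration (all file paths are hypothetical placeholders):
import hazelcast

client = hazelcast.HazelcastClient(
    ssl_enabled=True,
    ssl_cafile="/path/to/ca.pem",           # hypothetical CA bundle
    ssl_certfile="/path/to/client.pem",     # hypothetical client certificate
    ssl_keyfile="/path/to/client-key.pem",  # hypothetical private key
    ssl_check_hostname=True,
)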
- property cloud_discovery_token: Optional[str]¶
Discovery token of the Hazelcast Viridian cluster.
When this value is set, Hazelcast Viridian discovery is enabled.
- property async_start: bool¶
Enables non-blocking start mode of the client.
When set to True, client creation will not wait to connect to the cluster. The client instance will throw exceptions until it connects to the cluster and becomes ready. If set to False, the client will block until a cluster connection is established and it is ready to use. By default, set to False.
- property reconnect_mode: int¶
Defines how the client reconnects to cluster after a disconnect.
By default, set to ON. See the hazelcast.config.ReconnectMode for possible values.
- property retry_initial_backoff: Union[int, float]¶
Wait period in seconds after the first failure before retrying.
Must be non-negative. By default, set to 1.0.
- property retry_max_backoff: Union[int, float]¶
Upper bound for the backoff interval in seconds.
Must be non-negative. By default, set to 30.0.
- property retry_jitter: Union[int, float]¶
Defines how much to randomize backoffs.
At each iteration, the calculated back-off is randomized via the following method in pseudocode: Random(-jitter * current_backoff, jitter * current_backoff).
Must be in range [0.0, 1.0]. By default, set to 0.0 (no randomization).
- property retry_multiplier: Union[int, float]¶
The factor with which to multiply backoff after a failed retry.
Must be greater than or equal to 1. By default, set to 1.05.
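To make the interplay of these retry settings concrete, here is a small sketch that reproduces the documented pseudocode; it is not the client's internal implementation:
import random

def backoff_sequence(initial=1.0, multiplier=1.05, maximum=30.0, jitter=0.0, attempts=5):
    # Each retry waits for the current backoff, randomized by +/- jitter,
    # then the backoff grows by the multiplier, capped at the maximum.
    backoff = initial
    for _ in range(attempts):
        yield backoff + random.uniform(-jitter * backoff, jitter * backoff)
        backoff = min(backoff * multiplier, maximum)

# With the defaults, the waits are roughly 1.0, 1.05, 1.10, 1.16, 1.22 seconds.
print(list(backoff_sequence()))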
- property cluster_connect_timeout: Union[int, float]¶
Timeout value in seconds for the client to give up connecting to the cluster.
Must be non-negative or equal to -1. By default, set to -1, which means that the client will never stop trying to connect to the target cluster (infinite timeout).
- property portable_version: int¶
Default value for the portable version if the class does not have the get_portable_version() method.
Portable versions are used to differentiate two versions of hazelcast.serialization.api.Portable classes that have added or removed fields, or fields with different types.
- property data_serializable_factories: Dict[int, Dict[int, Type[IdentifiedDataSerializable]]]¶
Dictionary of factory id and corresponding hazelcast.serialization.api.IdentifiedDataSerializable factories.
A factory is simply a dictionary with class id and callable class constructors.
FACTORY_ID = 1
CLASS_ID = 1

class SomeSerializable(IdentifiedDataSerializable):
    # omitting the implementation
    pass

client = HazelcastClient(data_serializable_factories={
    FACTORY_ID: {
        CLASS_ID: SomeSerializable
    }
})
- property portable_factories: Dict[int, Dict[int, Type[Portable]]]¶
Dictionary of factory id and corresponding hazelcast.serialization.api.Portable factories.
A factory is simply a dictionary with class id and callable class constructors.
FACTORY_ID = 2
CLASS_ID = 2

class SomeSerializable(Portable):
    # omitting the implementation
    pass

client = HazelcastClient(portable_factories={
    FACTORY_ID: {
        CLASS_ID: SomeSerializable
    }
})
- property compact_serializers: List[CompactSerializer]¶
List of Compact serializers.
class Foo:
    pass

class FooSerializer(CompactSerializer[Foo]):
    pass

client = HazelcastClient(
    compact_serializers=[
        FooSerializer(),
    ],
)
- property class_definitions: List[ClassDefinition]¶
List of all portable class definitions.
- property check_class_definition_errors: bool¶
When enabled, the serialization system will check for class definition errors at start and throw a hazelcast.errors.HazelcastSerializationError with the error definition.
By default, set to True.
- property is_big_endian: bool¶
Defines if big-endian is used as the byte order for the serialization.
By default, set to True.
- property default_int_type: int¶
Defines how the int type is represented on the member side.
By default, it is serialized as INT (32 bits). See the hazelcast.config.IntType for possible values.
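For instance, to serialize Python ints as Java long values instead:
import hazelcast
from hazelcast.config import IntType

client = hazelcast.HazelcastClient(default_int_type=IntType.LONG)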
- property global_serializer: Optional[Type[StreamSerializer]]¶
Defines the global serializer.
This serializer is registered as a fallback serializer to handle all other objects if a serializer cannot be located for them.
- property custom_serializers: Dict[Type[Any], Type[StreamSerializer]]¶
Dictionary of class and the corresponding custom serializers.
class SomeClass:
    # omitting the implementation
    pass

class SomeClassSerializer(StreamSerializer):
    # omitting the implementation
    pass

client = HazelcastClient(custom_serializers={
    SomeClass: SomeClassSerializer
})
- property near_caches: Dict[str, NearCacheConfig]¶
Dictionary of near cache names to the corresponding near cache configurations.
See the hazelcast.config.NearCacheConfig for the possible configuration options.
The near cache configuration can also be passed as a dictionary of configuration option name to value. When an option is missing from the dictionary configuration, it will be set to its default value.
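A sketch of the dictionary form, using a hypothetical map name (option names are the NearCacheConfig properties documented below):
import hazelcast

client = hazelcast.HazelcastClient(
    near_caches={
        "my-map": {
            # Missing options fall back to their defaults.
            "time_to_live": 60,
            "eviction_max_size": 1000,
        },
    },
)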
- property load_balancer: Optional[LoadBalancer]¶
Load balancer implementation for the client.
- property membership_listeners: List[Tuple[Optional[Callable[[MemberInfo], None]], Optional[Callable[[MemberInfo], None]]]]¶
List of membership listener tuples.
Tuples must be of size 2. The first element is the function to be called when a member is added, and the second element is the function to be called when a member is removed, with the hazelcast.core.MemberInfo as the only parameter.
Either element can be None, but not both at the same time.
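For example, a sketch registering both callbacks at configuration time:
import hazelcast

client = hazelcast.HazelcastClient(
    membership_listeners=[
        # (member_added, member_removed); either may be None, but not both.
        (
            lambda member: print("Member added:", member),
            lambda member: print("Member removed:", member),
        ),
    ],
)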
- property lifecycle_listeners: List[Callable[[str], None]]¶
List of lifecycle listeners.
Listeners will be called with the new lifecycle state as the only parameter when the client changes lifecycle states.
- property flake_id_generators: Dict[str, FlakeIdGeneratorConfig]¶
Dictionary of flake id generator names to the corresponding flake id generator configurations.
See the hazelcast.config.FlakeIdGeneratorConfig for the possible configuration options.
The flake id generator configuration can also be passed as a dictionary of configuration option name to value. When an option is missing from the dictionary configuration, it will be set to its default value.
- property reliable_topics: Dict[str, ReliableTopicConfig]¶
Dictionary of reliable topic names to the corresponding reliable topic configurations.
See the hazelcast.config.ReliableTopicConfig for the possible configuration options.
The reliable topic configuration can also be passed as a dictionary of configuration option name to value. When an option is missing from the dictionary configuration, it will be set to its default value.
- property labels: List[str]¶
Labels for the client to be sent to the cluster.
- property heartbeat_interval: Union[int, float]¶
Time interval between the heartbeats sent by the client to the member nodes in seconds.
By default, set to 5.0.
- property heartbeat_timeout: Union[int, float]¶
If there is no message passing between the client and a member within the time given by this property (in seconds), the connection will be closed.
By default, set to 60.0.
- property invocation_timeout: Union[int, float]¶
When an invocation gets an exception because:
the member throws an exception,
the connection between the client and the member is closed, or
the client's heartbeat requests time out,
then the time passed since the invocation started is compared with this property. If the time has already passed, the exception is delegated to the user. If not, the invocation is retried. Note that if an invocation gets no exception and it is a long-running one, it will not get any exception, no matter how small this timeout is set. Time unit is seconds.
By default, set to 120.0.
- property invocation_retry_pause: Union[int, float]¶
Pause time between each retry cycle of an invocation in seconds.
By default, set to 1.0.
- property statistics_enabled: bool¶
When set to True, client statistics collection is enabled.
By default, set to False.
- property statistics_period: Union[int, float]¶
The period, in seconds, at which the client statistics are collected.
- property shuffle_member_list: bool¶
When this property is set to True, the client shuffles the given member list to prevent all clients from connecting to the same node.
When it is set to False, the client tries to connect to the nodes in the given order.
By default, set to True.
- property backup_ack_to_client_enabled: bool¶
Enables the client to get backup acknowledgements directly from the member to which backups are applied, which reduces the number of hops and increases performance for smart clients.
This option has no effect for unisocket clients.
By default, set to True (enabled).
- property operation_backup_timeout: Union[int, float]¶
If an operation has backups, defines how long the invocation will wait for acks from the backup replicas in seconds.
If acks are not received from some backups, there won’t be any rollback on other successful replicas.
By default, set to 5.0.
- property fail_on_indeterminate_operation_state: bool¶
When enabled, if an operation has sync backups and acks are not received from backup replicas in time, or the member which owns the primary replica of the target partition leaves the cluster, then the invocation fails with hazelcast.errors.IndeterminateOperationStateError.
However, even if the invocation fails, there will not be any rollback on other successful replicas.
By default, set to False (do not fail).
- property creds_username: Optional[str]¶
Username for credentials authentication (Enterprise feature).
- property creds_password: Optional[str]¶
Password for credentials authentication (Enterprise feature).
- property token_provider: Optional[TokenProvider]¶
Token provider for custom authentication (Enterprise feature).
Note that the token_provider setting has priority over credentials settings.
- property use_public_ip: bool¶
When set to True, the client uses the public IP addresses reported by members while connecting to them, if available.
By default, set to False.
- classmethod from_dict(d: Dict[str, Any]) Config [source]¶
Constructs a configuration object out of the given dictionary.
The dictionary items must be valid pairs of configuration option name to its value.
If a configuration is missing from the dictionary, the default value for it will be used.
- Parameters:
d – Dictionary that describes the configuration.
- Returns:
The constructed configuration object.
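For example:
import hazelcast
from hazelcast.config import Config

config = Config.from_dict({
    "cluster_name": "a-cluster",
    "cluster_members": ["10.90.0.2:5701"],
})
client = hazelcast.HazelcastClient(config)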
- class NearCacheConfig[source]¶
Bases:
object
- property invalidate_on_change: bool¶
Enables cluster-assisted invalidate on change behavior.
When set to True, entries are invalidated when they are changed in the cluster.
By default, set to True.
- property in_memory_format: int¶
Specifies in which format data will be stored in the Near Cache.
See the hazelcast.config.InMemoryFormat for possible values.
By default, set to BINARY.
- property time_to_live: Optional[Union[int, float]]¶
Maximum number of seconds that an entry can stay in cache.
When not set, entries won’t be evicted due to expiration.
- property max_idle: Optional[Union[int, float]]¶
Maximum number of seconds that an entry can stay in the Near Cache without being accessed.
When not set, entries won’t be evicted due to inactivity.
- property eviction_policy: int¶
Defines eviction policy configuration.
See the hazelcast.config.EvictionPolicy for possible values.
By default, set to LRU.
- property eviction_max_size: int¶
Defines the maximum number of entries kept in memory before eviction kicks in.
By default, set to 10000.
- property eviction_sampling_count: int¶
Number of random entries that are evaluated to see if some of them are already expired.
By default, set to 8.
- property eviction_sampling_pool_size: int¶
Size of the pool for eviction candidates.
The pool is kept sorted according to the eviction policy. By default, set to 16.
- classmethod from_dict(d: Dict[str, Any]) NearCacheConfig [source]¶
Constructs a configuration object out of the given dictionary.
The dictionary items must be valid pairs of configuration option name to its value.
If a configuration is missing from the dictionary, the default value for it will be used.
- Parameters:
d – Dictionary that describes the configuration.
- Returns:
The constructed configuration object.
- class FlakeIdGeneratorConfig[source]¶
Bases:
object
- property prefetch_count: int¶
Defines how many IDs are pre-fetched in the background when a new flake id is requested from the cluster.
Should be in the range 1..100000. By default, set to 100.
- property prefetch_validity: Union[int, float]¶
Defines for how long the pre-fetched IDs can be used.
If this time elapses, a new batch of IDs will be fetched. Time unit is seconds. By default, set to 600 (10 minutes).
The IDs contain a timestamp component, which ensures a rough global ordering of IDs. If an ID is assigned to an object that was created much later, it will be much out of order. If you don't care about ordering, set this value to 0 for unlimited ID validity.
- classmethod from_dict(d: Dict[str, Any]) FlakeIdGeneratorConfig [source]¶
Constructs a configuration object out of the given dictionary.
The dictionary items must be valid pairs of configuration option name to its value.
If a configuration is missing from the dictionary, the default value for it will be used.
- Parameters:
d – Dictionary that describes the configuration.
- Returns:
The constructed configuration object.
- class ReliableTopicConfig[source]¶
Bases:
object
- property read_batch_size: int¶
Number of messages the reliable topic will try to read in a batch.
It will get at least one, but if more messages are available, it will try to get more to increase throughput. By default, set to 10.
- property overload_policy: int¶
Policy to handle an overloaded topic.
By default, set to BLOCK. See the hazelcast.config.TopicOverloadPolicy for possible values.
- classmethod from_dict(d: Dict[str, Any]) ReliableTopicConfig [source]¶
Constructs a configuration object out of the given dictionary.
The dictionary items must be valid pairs of configuration option name to its value.
If a configuration is missing from the dictionary, the default value for it will be used.
- Parameters:
d – Dictionary that describes the configuration.
- Returns:
The constructed configuration object.
- class IntType[source]¶
Bases:
object
Integer type options that can be used by serialization service.
- VAR = 0¶
Integer types will be serialized as 8-, 16-, 32-, or 64-bit integers, or as Java BigInteger, according to their value. This option may cause problems when the Python client is used in conjunction with statically typed language clients such as Java or .NET.
- BYTE = 1¶
Integer types will be serialized as an 8-bit integer (as Java byte).
- SHORT = 2¶
Integer types will be serialized as a 16-bit integer (as Java short).
- INT = 3¶
Integer types will be serialized as a 32-bit integer (as Java int).
- LONG = 4¶
Integer types will be serialized as a 64-bit integer (as Java long).
- BIG_INT = 5¶
Integer types will be serialized as Java BigInteger. This option can handle integer types which are less than -2^63 or greater than or equal to 2^63. However, when this option is set, serializing/de-serializing integer types is costly.
- class EvictionPolicy[source]¶
Bases:
object
Near Cache eviction policy options.
- NONE = 0¶
No eviction.
- LRU = 1¶
Least Recently Used items will be evicted.
- LFU = 2¶
Least Frequently Used items will be evicted.
- RANDOM = 3¶
Items will be evicted randomly.
- class InMemoryFormat[source]¶
Bases:
object
Near Cache in memory format of the values.
- BINARY = 0¶
As Hazelcast serialized bytearray data.
- OBJECT = 1¶
As the actual object.
- class SSLProtocol[source]¶
Bases:
object
SSL protocol options.
TLSv1_3 requires at least Python 3.7 built with OpenSSL 1.1.1+.
- SSLv2 = 0¶
SSL 2.0 Protocol. RFC 6176 prohibits SSL 2.0. Please use TLSv1+.
- SSLv3 = 1¶
SSL 3.0 Protocol. RFC 7568 prohibits SSL 3.0. Please use TLSv1+.
- TLSv1 = 2¶
TLS 1.0 Protocol described in RFC 2246.
- TLSv1_1 = 3¶
TLS 1.1 Protocol described in RFC 4346.
- TLSv1_2 = 4¶
TLS 1.2 Protocol described in RFC 5246.
- TLSv1_3 = 5¶
TLS 1.3 Protocol described in RFC 8446.
- class QueryConstants[source]¶
Bases:
object
Contains constants for Query.
- KEY_ATTRIBUTE_NAME = '__key'¶
Attribute name of the key.
- THIS_ATTRIBUTE_NAME = 'this'¶
Attribute name of the value.
- class UniqueKeyTransformation[source]¶
Bases:
object
Defines an assortment of transformations which can be applied to unique key values.
- OBJECT = 0¶
Extracted unique key value is interpreted as an object value. A non-negative unique ID is assigned to every distinct object value.
- LONG = 1¶
Extracted unique key value is interpreted as a whole integer value of byte, short, int, or long type. The extracted value is upcast to long (if necessary), and a unique non-negative ID is assigned to every distinct value.
- RAW = 2¶
Extracted unique key value is interpreted as a whole integer value of byte, short, int, or long type. The extracted value is upcast to long (if necessary), and the resulting value is used directly as an ID.
- class IndexType[source]¶
Bases:
object
Type of the index.
- SORTED = 0¶
Sorted index. Can be used with equality and range predicates.
- HASH = 1¶
Hash index. Can be used with equality predicates.
- BITMAP = 2¶
Bitmap index. Can be used with equality predicates.
- class ReconnectMode[source]¶
Bases:
object
Reconnect options.
- OFF = 0¶
Prevent reconnect to cluster after a disconnect.
- ON = 1¶
Reconnect to cluster by blocking invocations.
- ASYNC = 2¶
Reconnect to cluster without blocking invocations. Invocations will receive ClientOfflineError.
- class TopicOverloadPolicy[source]¶
Bases:
object
A policy to deal with an overloaded topic; a topic where there is no place to store new messages.
The reliable topic uses a hazelcast.proxy.ringbuffer.Ringbuffer to store the messages. A ringbuffer doesn't track where readers are, so it has no concept of slow consumers. This provides many advantages like high-performance reads, but it also gives the reader the ability to re-read the same message multiple times in case of an error.
A ringbuffer has a limited, fixed capacity. A fast producer may overwrite old messages that are still being read by a slow consumer. To prevent this, we may configure a time-to-live on the ringbuffer.
Once the time-to-live is configured, the TopicOverloadPolicy controls how the publisher is going to deal with the situation where the ringbuffer is full and the oldest item in the ringbuffer is not old enough to get overwritten.
Keep in mind that this retention period (time-to-live) can keep messages from being overwritten, even though all readers might have already completed reading.
- DISCARD_OLDEST = 0¶
Using this policy, a message that has not expired can be overwritten.
No matter the retention period set, the overwrite will just overwrite the item.
This can be a problem for slow consumers because they were promised a certain time window to process messages. But it will benefit producers and fast consumers since they are able to continue. This policy sacrifices the slow consumer in favor of fast producers/consumers.
- DISCARD_NEWEST = 1¶
The message that was to be published is discarded.
- BLOCK = 2¶
The caller will wait till there is space in the ringbuffer.
- ERROR = 3¶
The publish call immediately fails.
API Documentation¶
Aggregator¶
- class Aggregator(*args, **kwds)[source]¶
Bases:
Generic[AggregatorResultType]
Marker base class for all aggregators.
Aggregators allow computing a value of some function (e.g sum or max) over the stored map entries. The computation is performed in a fully distributed manner, so no data other than the computed value is transferred to the client, making the computation fast.
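A hedged usage sketch, assuming a map whose entries have a numeric age attribute (map and attribute names are hypothetical):
from hazelcast.aggregator import count, number_avg

people = client.get_map("people")
total = people.aggregate(count()).result()
average_age = people.aggregate(number_avg("age")).result()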
- count(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that counts the input values.
Accepts None input values and None extracted values.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that counts the input values.
- distinct(attribute_path: Optional[str] = None) Aggregator[Set[AggregatorResultType]] [source]¶
Creates an aggregator that calculates the distinct set of input values.
Accepts None input values and None extracted values.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the distinct set of input values.
- double_avg(attribute_path: Optional[str] = None) Aggregator[float] [source]¶
Creates an aggregator that calculates the average of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type double (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with float or int values sent from the Python client, unless they are out of range for the double type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the average of the input values.
- double_sum(attribute_path: Optional[str] = None) Aggregator[float] [source]¶
Creates an aggregator that calculates the sum of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type double (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with float or int values sent from the Python client, unless they are out of range for the double type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the sum of the input values.
- fixed_point_sum(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that calculates the sum of the input values.
Does NOT accept None input values or None extracted values.
Accepts generic number input values. That means one should be able to use this aggregator with float or int values sent from the Python client, unless they are out of range for the long type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the sum of the input values.
- floating_point_sum(attribute_path: Optional[str] = None) Aggregator[float] [source]¶
Creates an aggregator that calculates the sum of the input values.
Does NOT accept None input values or None extracted values.
Accepts generic number input values. That means one should be able to use this aggregator with float or int values sent from the Python client, unless they are out of range for the double type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the sum of the input values.
- int_avg(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that calculates the average of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type int (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with int values sent from the Python client, unless they are out of range for the int type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the average of the input values.
- int_sum(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that calculates the sum of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type int (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with int values sent from the Python client, unless they are out of range for the int type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the sum of the input values.
- long_avg(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that calculates the average of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type long (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with int values sent from the Python client, unless they are out of range for the long type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the average of the input values.
- long_sum(attribute_path: Optional[str] = None) Aggregator[int] [source]¶
Creates an aggregator that calculates the sum of the input values.
Does NOT accept None input values or None extracted values.
Since the server-side implementation is in Java, values stored in the Map must be of type long (primitive or boxed) in Java or of a type that can be converted to that. That means one should be able to use this aggregator with int values sent from the Python client, unless they are out of range for the long type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the sum of the input values.
- max_(attribute_path: Optional[str] = None) Aggregator[AggregatorResultType] [source]¶
Creates an aggregator that calculates the max of the input values.
Accepts None input values and None extracted values.
Since the server-side implementation is in Java, values stored in the Map must implement the Comparable interface in Java. That means one should be able to use this aggregator with most of the primitive values sent from the Python client, as Java implements this interface for the equivalents of types like int, str, and float.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the max of the input values.
- min_(attribute_path: Optional[str] = None) Aggregator[AggregatorResultType] [source]¶
Creates an aggregator that calculates the min of the input values.
Accepts None input values and None extracted values.
Since the server-side implementation is in Java, values stored in the Map must implement the Comparable interface in Java. That means one should be able to use this aggregator with most of the primitive values sent from the Python client, as Java implements this interface for the equivalents of types like int, str, and float.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the min of the input values.
- number_avg(attribute_path: Optional[str] = None) Aggregator[float] [source]¶
Creates an aggregator that calculates the average of the input values.
Does NOT accept None input values or None extracted values.
Accepts generic number input values. That means one should be able to use this aggregator with float or int values sent from the Python client, unless they are out of range for the double type in Java.
- Parameters:
attribute_path – Extracts values from this path, if given.
- Returns:
An aggregator that calculates the average of the input values.
- max_by(attribute_path: str) Aggregator[MapEntry[KeyType, ValueType]] [source]¶
Creates an aggregator that calculates the max of the input values extracted from the given attribute_path and returns the input item containing the maximum value. If multiple items contain the maximum value, any of them is returned.
Accepts None input values and None extracted values.
Since the server-side implementation is in Java, values stored in the Map must implement the Comparable interface in Java. That means one should be able to use this aggregator with most of the primitive values sent from the Python client, as Java implements this interface for the equivalents of types like int, str, and float.
- Parameters:
attribute_path – Path to extract values from.
- Returns:
An aggregator that calculates the input value containing the maximum value extracted from the path.
- min_by(attribute_path: str) Aggregator[MapEntry[KeyType, ValueType]] [source]¶
Creates an aggregator that calculates the min of the input values extracted from the given attribute_path and returns the input item containing the minimum value. If multiple items contain the minimum value, any of them is returned.
Accepts None input values and None extracted values.
Since the server-side implementation is in Java, values stored in the Map must implement the Comparable interface in Java. That means one should be able to use this aggregator with most of the primitive values sent from the Python client, as Java implements this interface for the equivalents of types like int, str, and float.
- Parameters:
attribute_path – Path to extract values from.
- Returns:
An aggregator that calculates the input value containing the minimum value extracted from the path.
Hazelcast Cluster¶
- class ClusterService(internal_cluster_service)[source]¶
Bases:
object
Cluster service for Hazelcast clients.
It provides access to the members in the cluster and one can register for changes in the cluster members.
- add_listener(member_added: Optional[Callable[[MemberInfo], None]] = None, member_removed: Optional[Callable[[MemberInfo], None]] = None, fire_for_existing=False) str [source]¶
Adds a membership listener to listen for membership updates.
It will be notified when a member is added to the cluster or removed from the cluster. There is no check for duplicate registrations, so if you register the listener twice, it will get events twice.
- Parameters:
member_added – Function to be called when a member is added to the cluster.
member_removed – Function to be called when a member is removed from the cluster.
fire_for_existing – Whether or not to fire member_added for existing members.
- Returns:
Registration id of the listener which will be used for removing this listener.
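For example, a sketch that prints membership changes, including existing members:
registration_id = client.cluster_service.add_listener(
    member_added=lambda member: print("Member joined:", member.address),
    member_removed=lambda member: print("Member left:", member.address),
    fire_for_existing=True,
)
# ... later, remove the listener using the returned id.
client.cluster_service.remove_listener(registration_id)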
- remove_listener(registration_id: str) bool [source]¶
Removes the specified membership listener.
- Parameters:
registration_id – Registration id of the listener to be removed.
- Returns:
True
if the registration is removed,False
otherwise.
- get_members(member_selector: Optional[Callable[[MemberInfo], bool]] = None) List[MemberInfo] [source]¶
Lists the current members in the cluster.
Every member in the cluster returns the members in the same order. To obtain the oldest member in the cluster, you can retrieve the first item in the list.
- Parameters:
member_selector – Function to filter members to return. If not provided, the returned list will contain all the available cluster members.
- Returns:
Current members in the cluster
Core¶
Hazelcast Core objects and constants.
- class MemberInfo(address: Address, member_uuid: UUID, attributes: Dict[str, str], lite_member: bool, version: MemberVersion, _, address_map: Dict[EndpointQualifier, Address])[source]¶
Bases:
object
Represents a member in the cluster with its address, uuid, lite member status, attributes, version, and address map.
- address¶
Address of the member.
- uuid¶
UUID of the member.
- attributes¶
Configured attributes of the member.
- lite_member¶
True if the member is a lite member, False otherwise. Lite members do not own any partition.
- version¶
Hazelcast codebase version of the member.
- address_map¶
Dictionary of server socket addresses per EndpointQualifier of this member.
- class Address(host: str, port: int)[source]¶
Bases:
object
Represents an address of a member in the cluster.
- host¶
Host of the address.
- port¶
Port of the address.
- class ProtocolType[source]¶
Bases:
object
Types of server sockets.
A member typically responds to several types of protocols: member-to-member communication, the client-member protocol, WAN communication, etc. The default configuration uses a single server socket to listen for all the protocol types configured, while the Advanced Network Config of the server allows configuration of multiple server sockets.
- MEMBER = 0¶
Type of member server sockets.
- CLIENT = 1¶
Type of client server sockets.
- WAN = 2¶
Type of WAN server sockets.
- REST = 3¶
Type of REST server sockets.
- MEMCACHE = 4¶
Type of Memcached server sockets.
- class EndpointQualifier(protocol_type: int, identifier: Optional[str])[source]¶
Bases:
object
Uniquely identifies groups of network connections sharing a common ProtocolType and the same network settings, when the Hazelcast server is configured with Advanced Network Configuration enabled.
In some cases, just the ProtocolType is enough (e.g. since there can be only a single member server socket).
When just the ProtocolType is not enough (for example, when configuring outgoing WAN connections to 2 different target clusters), an identifier is used to uniquely identify the network configuration.
- property protocol_type: int¶
Protocol type of the endpoint.
- property identifier: Optional[str]¶
Unique identifier for same-protocol-type endpoints.
- class DistributedObjectEventType[source]¶
Bases:
object
Type of the distributed object event.
- CREATED = 'CREATED'¶
DistributedObject is created.
- DESTROYED = 'DESTROYED'¶
DistributedObject is destroyed.
- class DistributedObjectEvent(name: str, service_name: str, event_type: str, source: UUID)[source]¶
Bases:
object
Distributed Object Event
- name¶
Name of the distributed object.
- service_name¶
Service name of the distributed object.
- event_type¶
Event type. Either CREATED or DESTROYED.
- source¶
UUID of the member that fired the event.
- class SimpleEntryView(key: KeyType, value: ValueType, cost: int, creation_time: int, expiration_time: int, hits: int, last_access_time: int, last_stored_time: int, last_update_time: int, version: int, ttl: int, max_idle: int)[source]¶
Bases:
Generic[KeyType, ValueType]
EntryView represents a read-only view of a map entry.
- key¶
The key of the entry.
- value¶
The value of the entry.
- cost¶
The cost in bytes of the entry.
- creation_time¶
The creation time of the entry.
- expiration_time¶
The expiration time of the entry.
- hits¶
Number of hits of the entry.
- last_access_time¶
The last access time for the entry.
- last_stored_time¶
The last store time for the value.
- last_update_time¶
The last time the value was updated.
- version¶
The version of the entry.
- ttl¶
The last set time-to-live, in milliseconds.
- max_idle¶
The last set max idle time in milliseconds.
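As a sketch, an entry view is typically obtained through the map proxy's get_entry_view method (map and key are hypothetical):
entry_view = distributed_map.get_entry_view("key").result()
print(entry_view.value, entry_view.hits, entry_view.ttl)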
- class HazelcastJsonValue(value: Any)[source]¶
Bases:
object
HazelcastJsonValue is a wrapper for JSON formatted strings.
It is preferred to store HazelcastJsonValue instead of Strings for JSON formatted strings. Users can run predicates and use indexes on the attributes of the underlying JSON strings.
HazelcastJsonValue is queried using Hazelcast’s querying language.
In terms of querying, numbers in JSON strings are treated as either Long or Double on the Java side. str, bool, and None are treated as String, boolean, and null, respectively.
HazelcastJsonValue keeps the given string as it is. Strings are not checked for validity. Ill-formatted JSON strings may cause false positive or false negative results in queries.
HazelcastJsonValue can also be constructed from JSON serializable objects. In that case, objects are converted to JSON strings and stored as such. If an error occurs during the conversion, it is raised directly.
None values are not allowed.
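A short sketch of storing and querying JSON values (the map name, attribute, and predicate value are illustrative; the greater predicate is assumed to be importable from hazelcast.predicate):
from hazelcast.core import HazelcastJsonValue
from hazelcast.predicate import greater

json_map = client.get_map("json-map")
json_map.set("person-1", HazelcastJsonValue('{"name": "Jane", "age": 30}')).result()
# Query attributes of the underlying JSON documents.
adults = json_map.values(greater("age", 17)).result()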
CP Subsystem¶
- class CPSubsystem(context)[source]¶
Bases:
object
CP Subsystem is a component of Hazelcast that builds a strongly consistent layer for a set of distributed data structures.
Its APIs can be used for implementing distributed coordination use cases, such as leader election, distributed locking, synchronization, and metadata management.
Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures.
Data structures in CP Subsystem run in CP groups. Each CP group elects its own Raft leader and runs the Raft consensus algorithm independently.
The CP data structures differ from the other Hazelcast data structures in two aspects. First, an internal commit is performed on the METADATA CP group every time you fetch a proxy from this interface. Hence, callers should cache returned proxy objects. Second, if you call destroy() on a CP data structure proxy, that data structure is terminated on the underlying CP group and cannot be reinitialized until the CP group is force-destroyed. For this reason, please make sure that you are completely done with a CP data structure before destroying its proxy.
- get_atomic_long(name: str) AtomicLong [source]¶
Returns the distributed AtomicLong instance with given name.
The instance is created on CP Subsystem.
If no group name is given within the name argument, then the AtomicLong instance will be created on the DEFAULT CP group. If a group name is given, like .get_atomic_long("myLong@group1"), the given group will be initialized first, if not initialized already, and then the instance will be created on this group.
- Parameters:
name – Name of the AtomicLong.
- Returns:
The AtomicLong proxy for the given name.
- get_atomic_reference(name: str) AtomicReference [source]¶
Returns the distributed AtomicReference instance with given name.
The instance is created on CP Subsystem.
If no group name is given within the name argument, then the AtomicReference instance will be created on the DEFAULT CP group. If a group name is given, like .get_atomic_reference("myRef@group1"), the given group will be initialized first, if not initialized already, and then the instance will be created on this group.
- Parameters:
name – Name of the AtomicReference.
- Returns:
The AtomicReference proxy for the given name.
- get_count_down_latch(name: str) CountDownLatch [source]¶
Returns the distributed CountDownLatch instance with given name.
The instance is created on CP Subsystem.
If no group name is given within the name argument, then the CountDownLatch instance will be created on the DEFAULT CP group. If a group name is given, like .get_count_down_latch("myLatch@group1"), the given group will be initialized first, if not initialized already, and then the instance will be created on this group.
- Parameters:
name – Name of the CountDownLatch.
- Returns:
The CountDownLatch proxy for the given name.
- get_lock(name: str) FencedLock [source]¶
Returns the distributed FencedLock instance with given name.
The instance is created on CP Subsystem.
If no group name is given within the name argument, then the FencedLock instance will be created on the DEFAULT CP group. If a group name is given, like .get_lock("myLock@group1"), the given group will be initialized first, if not initialized already, and then the instance will be created on this group.
- Parameters:
name – Name of the FencedLock.
- Returns:
The FencedLock proxy for the given name.
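A hedged usage sketch (the lock name is hypothetical; lock() is assumed to deliver the fencing token through its future):
lock = client.cp_subsystem.get_lock("my-lock")
fence = lock.lock().result()
try:
    # Critical section guarded by the fenced lock.
    pass
finally:
    lock.unlock().result()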
- get_semaphore(name: str) Semaphore [source]¶
Returns the distributed Semaphore instance with given name.
The instance is created on CP Subsystem.
If no group name is given within the name argument, then the Semaphore instance will be created on the DEFAULT CP group. If a group name is given, like .get_semaphore("mySemaphore@group1"), the given group will be initialized first, if not initialized already, and then the instance will be created on this group.
- Parameters:
name – Name of the Semaphore.
- Returns:
The Semaphore proxy for the given name.
DBAPI-2¶
- class Type(value)[source]¶
Bases:
Enum
Type is the column type
- NULL = 0¶
- STRING = 1¶
- BOOLEAN = 2¶
- DATE = 3¶
- TIME = 4¶
- DATETIME = 5¶
- INTEGER = 6¶
- FLOAT = 7¶
- DECIMAL = 8¶
- JSON = 9¶
- OBJECT = 10¶
- class ColumnDescription(name, type, display_size, internal_size, precision, scale, null_ok)¶
Bases:
tuple
ColumnDescription provides name, type and nullability information
Create new instance of ColumnDescription(name, type, display_size, internal_size, precision, scale, null_ok)
- property display_size¶
Alias for field number 2
- property internal_size¶
Alias for field number 3
- property name¶
Alias for field number 0
- property null_ok¶
Alias for field number 6
- property precision¶
Alias for field number 4
- property scale¶
Alias for field number 5
- property type¶
Alias for field number 1
- class Cursor(conn: Connection)[source]¶
Bases:
object
Cursor is a database cursor object
This class should not be instantiated directly. Use the connection.cursor() method to create one.
- property connection: Connection¶
Returns the Connection object that created this cursor
- Returns:
The Connection of this cursor
- property description: Optional[List[ColumnDescription]]¶
Returns the descriptions of the columns
Get the descriptions after calling execute.
- Returns:
The list of column descriptions.
- property rowcount: int¶
Returns the number of rows in the result.
This is not supported by this driver, and -1 is always returned.
- Returns:
-1
- property rownumber: Optional[int]¶
Returns the index of the cursor in the result set
- Returns:
0-based index of the cursor in the result set.
- execute(operation: str, params: Optional[Tuple] = None) None [source]¶
Executes the given query with optional parameters
- Parameters:
operation – A SQL string. Use a question mark (?) as the placeholder if necessary.
params – Optional tuple that contains the actual parameters to replace the placeholders in the query.
- executemany(operation: str, seq_of_params: Sequence[Tuple]) None [source]¶
Runs the given query with the list of parameters
Calling executemany(sql, [params1, params2, ...]) is equivalent to execute(sql, params1), execute(sql, params2), ...
- Parameters:
operation – A SQL string. Use a question mark (?) as the placeholder if necessary.
seq_of_params – List of tuples that contain the actual parameters to replace the placeholders in the query.
- fetchone() Optional[SqlRow] [source]¶
Fetches a single row from the result
- Returns:
A single row if there are rows in the result, or None.
- fetchmany(size: Optional[int] = None) List[SqlRow] [source]¶
Fetches the given number of rows from the result
- Parameters:
size – Optional number of rows to return.
- Returns:
List of rows. The list will have at most size items.
- class Connection(config: Config)[source]¶
Bases:
object
Connection object provides connection to the Hazelcast cluster
This class should not be instantiated directly. Use the connect method to create an instance.
- cursor() Cursor [source]¶
Creates and returns a new cursor object
- Returns:
Cursor object that uses this connection.
- property Error¶
- property Warning¶
- property InterfaceError¶
- property DatabaseError¶
- property InternalError¶
- property OperationalError¶
- property ProgrammingError¶
- property IntegrityError¶
- property DataError¶
- property NotSupportedError¶
- connect(config=None, *, dsn='', user: Optional[str] = None, password: Optional[str] = None, host: Optional[str] = None, port: Optional[int] = None, cluster_name: Optional[str] = None) Connection [source]¶
Creates a new Connection to the cluster
- Parameters:
config – A Config object
dsn – Data Source Name in the following format:
hz://[user:password]@addr1:port1[?opt1=value1[&opt2=value2 ...]]
user – Optional user name for authenticating to the cluster.
password – Optional password for authenticating to the cluster.
host – Hostname or IP address of the cluster. By default, localhost.
port – Port of the cluster. By default, 5701.
cluster_name – Name of the cluster. By default, dev.
- Returns:
Connection object.
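A hedged end-to-end sketch, assuming the DB-API module is importable as hazelcast.db and that a people mapping already exists on the cluster:
from hazelcast.db import connect

conn = connect(cluster_name="dev", host="localhost", port=5701)
cursor = conn.cursor()
cursor.execute("SELECT name, age FROM people WHERE age > ?", (18,))
row = cursor.fetchone()
print(row)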
- exception InternalError[source]¶
Bases:
DatabaseError
- exception OperationalError[source]¶
Bases:
DatabaseError
- exception ProgrammingError[source]¶
Bases:
DatabaseError
- exception IntegrityError[source]¶
Bases:
DatabaseError
- exception DataError[source]¶
Bases:
DatabaseError
- exception NotSupportedError[source]¶
Bases:
DatabaseError
Errors¶
- exception HazelcastError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
Exception
General HazelcastError class.
- exception ArrayIndexOutOfBoundsError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ArrayStoreError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception AuthenticationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception CacheNotExistsError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception CallerNotMemberError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception CancellationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ClassCastError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ClassNotFoundError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ConcurrentModificationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ConfigMismatchError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ConfigurationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception DistributedObjectDestroyedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception DuplicateInstanceNameError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastEOFError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ExecutionError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastInstanceNotActiveError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastOverloadError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastSerializationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastIOError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalArgumentError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalAccessException(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalAccessError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalMonitorStateError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalStateError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IllegalThreadStateError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IndexOutOfBoundsError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastInterruptedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception InvalidAddressError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception InvalidConfigurationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception MemberLeftError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NegativeArraySizeError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoSuchElementError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NotSerializableError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NullPointerError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception OperationTimeoutError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception PartitionMigratingError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception QueryError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception QueryResultSizeExceededError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception SplitBrainProtectionError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ReachedMaxSizeError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception RejectedExecutionError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ResponseAlreadySentError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception RetryableHazelcastError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception RetryableIOError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastRuntimeError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception SecurityError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception SocketError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception StaleSequenceError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TargetDisconnectedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TargetNotMemberError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastTimeoutError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TopicOverloadError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TransactionError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TransactionNotActiveError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TransactionTimedOutError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception URISyntaxError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception UTFDataFormatError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception UnsupportedOperationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception WrongTargetError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception XAError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception AccessControlError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception LoginError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception UnsupportedCallbackError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoDataMemberInClusterError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ReplicatedMapCantBeCreatedOnLiteMemberError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception MaxMessageSizeExceededError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception WANReplicationQueueFullError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastAssertionError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception OutOfMemoryError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception StackOverflowError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NativeOutOfMemoryError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ServiceNotFoundError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception StaleTaskIdError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception DuplicateTaskError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception StaleTaskError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception LocalMemberResetError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception IndeterminateOperationStateError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NodeIdOutOfRangeError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception TargetNotReplicaError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception MutationDisallowedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ConsistencyLostError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception HazelcastClientNotActiveError(message='Client is not active')[source]¶
Bases:
HazelcastError
- exception HazelcastCertificationError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception ClientOfflineError[source]¶
Bases:
HazelcastError
- exception ClientNotAllowedInClusterError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception VersionMismatchError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoSuchMethodError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoSuchMethodException(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoSuchFieldError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoSuchFieldException(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NoClassDefFoundError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception UndefinedErrorCodeError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception SessionExpiredError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception WaitKeyCancelledError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception LockAcquireLimitReachedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception LockOwnershipLostError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception CPGroupDestroyedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception CannotReplicateError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception LeaderDemotedError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception StaleAppendRequestError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception NotLeaderError(message: Optional[str] = None, cause: Optional[Exception] = None)[source]¶
Bases:
HazelcastError
- exception InvocationMightContainCompactDataError[source]¶
Bases:
HazelcastError
Signals that the invocation might contain Compact serialized data, so it would not be safe to send it yet. To preserve the invariant that such data is never sent before its schemas, these invocations are held while the client reconnects or retries urgent invocations.
Future¶
- class Future[source]¶
Bases:
Generic
[ResultType
]Future is used for representing an asynchronous computation result.
- set_result(result: ResultType) None [source]¶
Sets the result of the Future.
- Parameters:
result – Result of the Future.
- set_exception(exception: Exception, traceback: Optional[TracebackType] = None) None [source]¶
Sets the exception for this Future in case of errors.
- Parameters:
exception – Exception to raise in case of error.
traceback – Traceback of the exception.
- result() ResultType [source]¶
Returns the result of the Future, which makes the call synchronous if the result has not been computed yet.
- Returns:
Result of the Future.
- done() bool [source]¶
Determines whether the result is computed or not.
- Returns:
True
if the result is computed,False
otherwise.
- running() bool [source]¶
Determines whether the asynchronous computation is still running or not.
- Returns:
True
if the result is being computed,False
otherwise.
- exception() Optional[Exception] [source]¶
Returns the exceptional result, if any.
- Returns:
Exceptional result of this Future.
- continue_with(continuation_func: Callable[[...], Any], *args: Any) Future [source]¶
Create a continuation that executes when the Future is completed.
- Parameters:
continuation_func – A function which takes the Future as the only parameter. Return value of the function will be set as the result of the continuation future. If the return value of the function is another Future, it will be chained to the returned Future.
*args – Arguments to be passed into
continuation_function
.
- Returns:
A new Future which will be completed when the continuation is done.
- combine_futures(futures: Sequence[Future]) Future [source]¶
Combines set of Futures.
It waits for the completion of all input Futures regardless of their output.
The returned Future completes with the list of the results of the input Futures, respecting the input order.
If one of the input Futures completes exceptionally, the returned Future also completes exceptionally. In case of multiple exceptional completions, the returned Future will be completed with the first exceptional result.
- Parameters:
futures – List of Futures to be combined.
- Returns:
Result of the combination.
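A short sketch of chaining and combining the Futures returned by map operations; the map name is illustrative:
import hazelcast
from hazelcast.future import combine_futures

client = hazelcast.HazelcastClient()
squares = client.get_map("squares")

# Fire several non-blocking set requests and wait for all of them at once.
combine_futures([squares.set(i, i * i) for i in range(3)]).result()

# Chain a continuation that runs once the get request completes.
squares.get(2).continue_with(lambda f: print("2 squared is", f.result())).result()

client.shutdown()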
Lifecycle¶
- class LifecycleState[source]¶
Bases:
object
Lifecycle states.
- STARTING = 'STARTING'¶
The client is starting.
- STARTED = 'STARTED'¶
The client has started.
- CONNECTED = 'CONNECTED'¶
The client connected to a member.
- SHUTTING_DOWN = 'SHUTTING_DOWN'¶
The client is shutting down.
- DISCONNECTED = 'DISCONNECTED'¶
The client disconnected from a member.
- SHUTDOWN = 'SHUTDOWN'¶
The client has shut down.
- class LifecycleService(internal_lifecycle_service)[source]¶
Bases:
object
Lifecycle service for the Hazelcast client. Allows determining the state of the client and adding or removing lifecycle listeners.
- is_running() bool [source]¶
Checks whether or not the instance is running.
- Returns:
True
if the client is active and running,False
otherwise.
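For example, is_running() can be checked before and after shutdown; a minimal sketch:
import hazelcast

client = hazelcast.HazelcastClient()
print("Running:", client.lifecycle_service.is_running())  # True

client.shutdown()
print("Running:", client.lifecycle_service.is_running())  # False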
Partition¶
- class PartitionService(internal_partition_service, serialization_service, send_schema_and_retry_fn)[source]¶
Bases:
object
Allows retrieving information about the partition count, the partition owner or the partition id of a key.
- get_partition_owner(partition_id: int) Optional[UUID] [source]¶
Returns the owner of the partition if it's set, None otherwise.
- Parameters:
partition_id – The partition id.
- Returns:
Owner of the partition.
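A minimal sketch that maps a key to its partition and looks up the owner; get_partition_id() is taken from the class description above and the key is illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
service = client.partition_service

# Map an illustrative key to its partition, then ask who owns it.
partition_id = service.get_partition_id("some-key")
# May be None if the client has not received the partition table yet.
print("Owner:", service.get_partition_owner(partition_id))

client.shutdown()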
Predicate¶
- class Predicate[source]¶
Bases:
object
Represents a map entry predicate. Implementations of this class are basic building blocks for performing queries on map entries.
Special Attributes
The predicates that accept an attribute name support two special attributes:
__key – instructs the predicate to act on the key associated with an item.
this – instructs the predicate to act on the value associated with an item.
Attribute Paths
Dot notation may be used in an attribute name to instruct the predicate to act on an attribute located at a deeper level of an item. Given the "full_name.first_name" path, the predicate will act on the first_name attribute of the value fetched by the full_name attribute from the item itself. If any of the attributes along the path can't be resolved, IllegalArgumentError will be thrown. Reading any attribute from None will produce a None value.
Square brackets notation may be used to instruct the predicate to act on the list element at the specified index. Given the "names[0]" path, the predicate will act on the first item of the list fetched by the names attribute from the item. The index must be non-negative, otherwise IllegalArgumentError will be thrown. Reading from an index pointing beyond the end of the list will produce a None value.
The special any keyword may be used to act on every list element. Given the "names[any].full_name.first_name" path, the predicate will act on the first_name attribute of the value fetched by the full_name attribute from every list element stored in the item itself under the names attribute.
Handling of None
The predicates accept None as a value to compare with, or a pattern to match against, if and only if that is explicitly stated in the method documentation. In this case, the usual equality logic applies: if None is provided, the predicate passes an item if and only if the value stored under the item attribute in question is also None.
Special care must be taken while comparing with None values stored inside items being filtered through the predicates created by the following methods: greater(), greater_or_equal(), less(), less_or_equal(), between(). They always evaluate to False and therefore never pass such items.
Implicit Type Conversion
If the type of the stored value doesn't match the type of the value provided to the predicate, implicit type conversion is performed before predicate evaluation. The provided value is converted to match the type of the stored attribute value. If no conversion matching the type exists, IllegalArgumentError is thrown.
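A minimal sketch that queries a map of primitive values through the special this attribute; the map name and contents are illustrative:
import hazelcast
from hazelcast.predicate import greater

client = hazelcast.HazelcastClient()
ages = client.get_map("ages").blocking()
ages.put("alice", 35)
ages.put("bob", 25)

# "this" targets the stored value itself.
print(ages.values(greater("this", 30)))  # [35]

client.shutdown()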
- class PagingPredicate[source]¶
Bases:
Predicate
This class is a special Predicate which helps to get a page-by-page result of a query.
It can be constructed with a page size, an inner predicate for filtering, and a comparator for sorting. This class is stateful and not thread-safe. To be able to reuse it for another query, one should call
reset()
.- next_page() int [source]¶
Sets page index to next page.
If new index is out of range, the query results that this paging predicate will retrieve will be an empty list.
- Returns:
Updated page index
- previous_page() int [source]¶
Sets page index to previous page.
If current page index is 0, this method does nothing.
- Returns:
Updated page index.
- property page: int¶
The current page index.
- Getter:
Returns the current page index.
- Setter:
Sets the current page index. If the page is out of range, the query results that this paging predicate will retrieve will be an empty list. New page index must be greater than or equal to
0
.
- property page_size: int¶
The page size.
- Getter:
Returns the page size.
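A minimal sketch of walking a result set page by page; some_map stands for any blocking Map proxy and is assumed to exist:
from hazelcast.predicate import paging, true

# Three items per page; true() passes every entry, natural ordering applies.
predicate = paging(true(), 3)

first_page = some_map.values(predicate)
predicate.next_page()
second_page = some_map.values(predicate)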
- sql(expression: str) Predicate [source]¶
Creates a predicate that will pass items that match the given SQL
where
expression.The following operators are supported:
=
,<
,>
,<=
,>=
==
,!=
,<>
,BETWEEN
,IN
,LIKE
,ILIKE
,REGEX
AND
,OR
,NOT
.The operators are case-insensitive, but attribute names are case sensitive.
Example:
active AND (age > 20 OR salary < 60000)
Differences to standard SQL:
We don't use ternary boolean logic. field=10 evaluates to false if field is null; in standard SQL it evaluates to UNKNOWN.
IS [NOT] NULL is not supported; use =NULL or <>NULL instead.
IS [NOT] DISTINCT FROM is not supported, but = and <> behave like it.
- Parameters:
expression – The
where
expression.- Returns:
The created sql predicate instance.
- equal(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the given
value
and the value stored under the given itemattribute
are equal.- Parameters:
attribute – The attribute to fetch the value for comparison from.
value – The value to compare the attribute value against. Can be
None
.
- Returns:
The created equal predicate instance.
- not_equal(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the given
value
and the value stored under the given itemattribute
are not equal.- Parameters:
attribute – The attribute to fetch the value for comparison from.
value – The value to compare the attribute value against. Can be
None
.
- Returns:
The created not equal predicate instance.
- like(attribute: str, pattern: Optional[str]) Predicate [source]¶
Creates a predicate that will pass items if the given
pattern
matches the value stored under the given itemattribute
.- Parameters:
attribute – The attribute to fetch the value for matching from.
pattern – The pattern to match the attribute value against. The
%
(percentage sign) is a placeholder for multiple characters, the_
(underscore) is a placeholder for a single character. If you need to match the percentage sign or the underscore character itself, escape it with the backslash, for example"\%"
string will match the percentage sign. Can beNone
.
- Returns:
The created like predicate instance.
- ilike(attribute: str, pattern: Optional[str]) Predicate [source]¶
Creates a predicate that will pass items if the given
pattern
matches the value stored under the given itemattribute
in a case-insensitive manner.- Parameters:
attribute – The attribute to fetch the value for matching from.
pattern – The pattern to match the attribute value against. The
%
(percentage sign) is a placeholder for multiple characters, the_
(underscore) is a placeholder for a single character. If you need to match the percentage sign or the underscore character itself, escape it with the backslash, for example"\%"
string will match the percentage sign. Can beNone
.
- Returns:
The created case-insensitive like predicate instance.
- regex(attribute: str, pattern: Optional[str]) Predicate [source]¶
Creates a predicate that will pass items if the given
pattern
matches the value stored under the given itemattribute
.- Parameters:
attribute – The attribute to fetch the value for matching from.
pattern – The pattern to match the attribute value against. The pattern interpreted exactly the same as described in https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html. Can be
None
.
- Returns:
The created regex predicate instance.
- and_(*predicates: Predicate) Predicate [source]¶
Creates a predicate that will perform the logical
and
operation on the given predicates.If no predicate is provided as argument, the created predicate will always evaluate to
true
and will pass any item.- Parameters:
*predicates – The child predicates to form the resulting
and
predicate from.- Returns:
The created and predicate instance.
- or_(*predicates: Predicate) Predicate [source]¶
Creates a predicate that will perform the logical
or
operation on the given predicates.If no predicate is provided as argument, the created predicate will always evaluate to
false
and will never pass any items.- Parameters:
*predicates – The child predicates to form the resulting
or
predicate from.- Returns:
The created or predicate instance.
- not_(predicate: Predicate) Predicate [source]¶
Creates a predicate that will negate the result of the given
predicate
.- Parameters:
predicate – The predicate to negate the value of.
- Returns:
The created not predicate instance.
- between(attribute: str, from_: Any, to: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is contained inside the given range.The range begins at the given
from_
bound and ends at the givento
bound. The bounds are inclusive.- Parameters:
attribute – The attribute to fetch the value to check from.
from – The inclusive lower bound of the range to check.
to – The inclusive upper bound of the range to check.
- Returns:
The created between predicate.
- in_(attribute: str, *values: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is a member of the givenvalues
.- Parameters:
attribute – The attribute to fetch the value to test from.
*values – The values set to test the membership in. Individual values can be
None
.
- Returns:
The created in predicate.
- instance_of(class_name: str) Predicate [source]¶
Creates a predicate that will pass entries for which the value class is an instance of the given
class_name
.- Parameters:
class_name – The name of class the created predicate will check for.
- Returns:
The created instance of predicate.
- false() Predicate [source]¶
Creates a predicate that will filter out all items.
- Returns:
The created false predicate.
- true() Predicate [source]¶
Creates a predicate that will pass all items.
- Returns:
The created true predicate.
- paging(predicate: Predicate, page_size: int, comparator: Optional[Any] = None) PagingPredicate [source]¶
Creates a paging predicate with an inner predicate, page size and comparator. Results will be filtered via inner predicate and will be ordered via comparator if provided.
- Parameters:
predicate – The inner predicate through which results will be filtered. Can be None. In that case, results will not be filtered.
page_size – The page size.
comparator – The comparator through which results will be ordered. The comparison logic must be defined on the server side. Can be None. In that case, the results will be returned in natural order.
- Returns:
The created paging predicate.
- greater(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is greater than the givenvalue
.- Parameters:
attribute – The left-hand side attribute to fetch the value for comparison from.
value – The right-hand side value to compare the attribute value against.
- Returns:
The created greater than predicate.
- greater_or_equal(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is greater than or equal to the givenvalue
.- Parameters:
attribute – the left-hand side attribute to fetch the value for comparison from.
value – The right-hand side value to compare the attribute value against.
- Returns:
The created greater than or equal to predicate.
- less(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is less than the givenvalue
.- Parameters:
attribute – The left-hand side attribute to fetch the value for comparison from.
value – The right-hand side value to compare the attribute value against.
- Returns:
The created less than predicate.
- less_or_equal(attribute: str, value: Any) Predicate [source]¶
Creates a predicate that will pass items if the value stored under the given item
attribute
is less than or equal to the givenvalue
.- Parameters:
attribute – The left-hand side attribute to fetch the value for comparison from.
value – The right-hand side value to compare the attribute value against.
- Returns:
The created less than or equal to predicate.
Projection¶
- class Projection(*args, **kwds)[source]¶
Bases:
Generic
[ProjectionType
]Marker base class for all projections.
Projections allow the client to transform (strip down) each query result object in order to avoid redundant network traffic.
- single_attribute(attribute_path: str) Projection[ProjectionType] [source]¶
Creates a projection that extracts the value of the given attribute path.
- Parameters:
attribute_path – Path to extract the attribute from.
- Returns:
A projection that extracts the value of the given attribute path.
- multi_attribute(*attribute_paths: str) Projection[List[Any]] [source]¶
Creates a projection that extracts the values of one or more attribute paths.
- Parameters:
*attribute_paths – Paths to extract the attributes from.
- Returns:
A projection that extracts the values of the given attribute paths.
- identity() Projection[MapEntry[KeyType, ValueType]] [source]¶
Creates a projection that does no transformation.
- Returns:
A projection that does no transformation.
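A minimal sketch, assuming an employees blocking Map proxy whose values expose name and age fields, and using the Map proxy's project() method:
from hazelcast.projection import single_attribute, multi_attribute

# Only the projected attributes travel over the wire, not the whole objects.
names = employees.project(single_attribute("name"))
name_age_pairs = employees.project(multi_attribute("name", "age"))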
Hazelcast Proxies¶
Base¶
- class Proxy(service_name: str, name: str, context)[source]¶
Bases:
Generic
[BlockingProxyType
],ABC
Provides basic functionality for Hazelcast Proxies.
- class PartitionSpecificProxy(service_name, name, context)[source]¶
Bases:
Proxy
[BlockingProxyType
],ABC
Provides basic functionality for Partition Specific Proxies.
- class TransactionalProxy(name, transaction, context)[source]¶
Bases:
object
Provides an interface for all transactional distributed objects.
- class ItemEventType[source]¶
Bases:
object
Type of item events.
- ADDED = 1¶
Fired when an item is added.
- REMOVED = 2¶
Fired when an item is removed.
- class EntryEventType[source]¶
Bases:
object
Type of entry event.
- ADDED = 1¶
Fired if an entry is added.
- REMOVED = 2¶
Fired if an entry is removed.
- UPDATED = 4¶
Fired if an entry is updated.
- EVICTED = 8¶
Fired if an entry is evicted.
- EXPIRED = 16¶
Fired if an entry is expired.
- EVICT_ALL = 32¶
Fired if all entries are evicted.
- CLEAR_ALL = 64¶
Fired if all entries are cleared.
- MERGED = 128¶
Fired if an entry is merged after a network partition.
- INVALIDATION = 256¶
Fired if an entry is invalidated.
- LOADED = 512¶
Fired if an entry is loaded.
- class ItemEvent(name: str, item: ItemEventType, event_type: int, member: MemberInfo)[source]¶
Bases:
Generic
[ItemType
]Map Item event.
- name¶
Name of the proxy that fired the event.
- item¶
The item related to the event.
- event_type¶
Type of the event.
- member¶
Member that fired the event.
- class EntryEvent(key: KeyType, value: ValueType, old_value: ValueType, merging_value: ValueType, event_type: int, member_uuid: UUID, number_of_affected_entries: int)[source]¶
Bases:
Generic
[KeyType
,ValueType
]Map Entry event.
- event_type¶
Type of the event.
- uuid¶
UUID of the member that fired the event.
- number_of_affected_entries¶
Number of affected entries by this event.
- key¶
The key of this entry event.
- value¶
The value of the entry event.
- old_value¶
The old value of the entry event.
- merging_value¶
The incoming merging value of the entry event.
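A minimal sketch of receiving EntryEvent objects through a map entry listener; the employees map is assumed to exist:
def on_added(event):
    # "event" is an EntryEvent; value is populated because include_value=True.
    print("Added:", event.key, "->", event.value)

employees.add_entry_listener(include_value=True, added_func=on_added)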
- class TopicMessage(name: str, message: MessageType, publish_time: int, member: MemberInfo)[source]¶
Bases:
Generic
[MessageType
]Topic message.
- name¶
Name of the proxy that fired the event.
- message¶
The message sent to Topic.
- publish_time¶
UNIX time, in seconds, at which the message was published.
- member¶
Member that fired the event.
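A minimal sketch of publishing and receiving TopicMessage objects; the client and topic name are illustrative:
topic = client.get_topic("announcements")

def on_message(message):
    # "message" is a TopicMessage instance.
    print(message.name, message.message, message.publish_time)

topic.add_listener(on_message)
topic.publish("hello").result()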
CP Proxies¶
AtomicLong¶
- class AtomicLong(context, group_id, service_name, proxy_name, object_name)[source]¶
Bases:
BaseCPProxy
[BlockingAtomicLong
]AtomicLong is a redundant and highly available distributed counter for 64-bit integers (
long
type in Java).It works on top of the Raft consensus algorithm. It offers linearizability during crash failures and network partitions. It is CP with respect to the CAP principle. If a network partition occurs, it remains available on at most one side of the partition.
The AtomicLong implementation does not offer exactly-once / effectively-once execution semantics. It goes with at-least-once execution semantics by default and can cause an API call to be committed multiple times in case of CP member failures. It can be tuned to offer at-most-once execution semantics. Please see the fail-on-indeterminate-operation-state server-side setting.
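A minimal sketch, assuming a client connected to a cluster with the CP Subsystem enabled:
counter = client.cp_subsystem.get_atomic_long("counter").blocking()

counter.set(40)
print(counter.add_and_get(2))          # 42
print(counter.compare_and_set(42, 0))  # True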
- add_and_get(delta: int) Future[int] [source]¶
Atomically adds the given value to the current value.
- Parameters:
delta – The value to add to the current value.
- Returns:
The updated value, the given value added to the current value.
- compare_and_set(expect: int, update: int) Future[bool] [source]¶
Atomically sets the value to the given updated value only if the current value equals the expected value.
- Parameters:
expect – The expected value.
update – The new value.
- Returns:
True
if successful; orFalse
if the actual value was not equal to the expected value.
- decrement_and_get() Future[int] [source]¶
Atomically decrements the current value by one.
- Returns:
The updated value, the current value decremented by one.
- get_and_decrement() Future[int] [source]¶
Atomically decrements the current value by one.
- Returns:
The old value.
- get_and_add(delta: int) Future[int] [source]¶
Atomically adds the given value to the current value.
- Parameters:
delta – The value to add to the current value.
- Returns:
The old value before the add.
- get_and_set(new_value: int) Future[int] [source]¶
Atomically sets the given value and returns the old value.
- Parameters:
new_value – The new value.
- Returns:
The old value.
- increment_and_get() Future[int] [source]¶
Atomically increments the current value by one.
- Returns:
The updated value, the current value incremented by one.
- get_and_increment() Future[int] [source]¶
Atomically increments the current value by one.
- Returns:
The old value.
- set(new_value: int) Future[None] [source]¶
Atomically sets the given value.
- Parameters:
new_value – The new value
- alter(function: Any) Future[None] [source]¶
Alters the currently stored value by applying a function on it.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- alter_and_get(function: Any) Future[int] [source]¶
Alters the currently stored value by applying a function on it and gets the result.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- Returns:
The new value.
- get_and_alter(function: Any) Future[int] [source]¶
Alters the currently stored value by applying a function on it and gets the old value.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- Returns:
The old value.
- apply(function: Any) Future[Any] [source]¶
Applies a function on the value, the actual stored value will not change.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function applied to the currently stored value.
- Returns:
The result of the function application.
- blocking() BlockingAtomicLong [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingAtomicLong(wrapped: AtomicLong)[source]¶
Bases:
AtomicLong
- add_and_get(delta: int) int [source]¶
Atomically adds the given value to the current value.
- Parameters:
delta – The value to add to the current value.
- Returns:
The updated value, the given value added to the current value.
- compare_and_set(expect: int, update: int) bool [source]¶
Atomically sets the value to the given updated value only if the current value equals the expected value.
- Parameters:
expect – The expected value.
update – The new value.
- Returns:
True
if successful; orFalse
if the actual value was not equal to the expected value.
- decrement_and_get() int [source]¶
Atomically decrements the current value by one.
- Returns:
The updated value, the current value decremented by one.
- get_and_decrement() int [source]¶
Atomically decrements the current value by one.
- Returns:
The old value.
- get_and_add(delta: int) int [source]¶
Atomically adds the given value to the current value.
- Parameters:
delta – The value to add to the current value.
- Returns:
The old value before the add.
- get_and_set(new_value: int) int [source]¶
Atomically sets the given value and returns the old value.
- Parameters:
new_value – The new value.
- Returns:
The old value.
- increment_and_get() int [source]¶
Atomically increments the current value by one.
- Returns:
The updated value, the current value incremented by one.
- get_and_increment() int [source]¶
Atomically increments the current value by one.
- Returns:
The old value.
- set(new_value: int) None [source]¶
Atomically sets the given value.
- Parameters:
new_value – The new value
- alter(function: Any) None [source]¶
Alters the currently stored value by applying a function on it.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- alter_and_get(function: Any) int [source]¶
Alters the currently stored value by applying a function on it and gets the result.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- Returns:
The new value.
- get_and_alter(function: Any) int [source]¶
Alters the currently stored value by applying a function on it and gets the old value.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored value.
- Returns:
The old value.
- apply(function: Any) Any [source]¶
Applies a function on the value, the actual stored value will not change.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function applied to the currently stored value.
- Returns:
The result of the function application.
- blocking() BlockingAtomicLong [source]¶
Returns a version of this proxy with only blocking method calls.
AtomicReference¶
- class AtomicReference(context, group_id, service_name, proxy_name, object_name)[source]¶
Bases:
BaseCPProxy
[BlockingAtomicReference
],Generic
[ElementType
]A distributed, highly available object reference with atomic operations.
AtomicReference offers linearizability during crash failures and network partitions. It is CP with respect to the CAP principle. If a network partition occurs, it remains available on at most one side of the partition.
The following are some considerations you need to know when you use AtomicReference:
AtomicReference works based on the byte-content and not on the object-reference. If you use the
compare_and_set()
method, do not change the original value because its serialized content will then be different.All methods returning an object return a private copy. You can modify the private copy, but the rest of the world is shielded from your changes. If you want these changes to be visible to the rest of the world, you need to write the change back to the AtomicReference; but be careful about introducing a data-race.
The in-memory format of an AtomicReference is binary. The receiving side does not need to have the class definition available unless it needs to be deserialized on the other side, e.g., because a method like alter() is executed. This deserialization is done for every call that needs the object instead of the binary content, so be careful with expensive object graphs that need to be deserialized.
If you have an object with many fields or an object graph, and you only need to calculate some information or need a subset of fields, you can use the apply() method. With the apply() method, the whole object does not need to be sent over the line; only the information that is relevant is sent.
AtomicReference does not offer exactly-once / effectively-once execution semantics. It goes with at-least-once execution semantics by default and can cause an API call to be committed multiple times in case of CP member failures. It can be tuned to offer at-most-once execution semantics. Please see the fail-on-indeterminate-operation-state server-side setting.
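A minimal sketch, again assuming a CP-enabled cluster; remember that comparison happens on the serialized byte content:
ref = client.cp_subsystem.get_atomic_reference("leader").blocking()

ref.set("node-1")
print(ref.compare_and_set("node-1", "node-2"))  # True
print(ref.contains("node-2"))                   # True
print(ref.is_none())                            # False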
- compare_and_set(expect: Optional[ElementType], update: Optional[ElementType]) Future[bool] [source]¶
Atomically sets the value to the given updated value only if the current value is equal to the expected value.
- Parameters:
expect – The expected value.
update – The new value.
- Returns:
True
if successful, orFalse
if the actual value was not equal to the expected value.
- set(new_value: Optional[ElementType]) Future[None] [source]¶
Atomically sets the given value.
- Parameters:
new_value – The new value.
- get_and_set(new_value: Optional[ElementType]) Future[Optional[ElementType]] [source]¶
Gets the old value and sets the new value.
- Parameters:
new_value – The new value.
- Returns:
The old value.
- is_none() Future[bool] [source]¶
Checks if the stored reference is
None
.- Returns:
True
if the stored reference isNone
,False
otherwise.
- contains(value: Optional[ElementType]) Future[bool] [source]¶
Checks if the reference contains the value.
- Parameters:
value – The value to check (is allowed to be
None
).- Returns:
True
if the value is found,False
otherwise.
- alter(function: Any) Future[None] [source]¶
Alters the currently stored reference by applying a function on it.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- alter_and_get(function: Any) Future[Optional[ElementType]] [source]¶
Alters the currently stored reference by applying a function on it and gets the result.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- Returns:
The new value, the result of the applied function.
- get_and_alter(function: Any) Future[Optional[ElementType]] [source]¶
Alters the currently stored reference by applying a function on it and gets the old value.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- Returns:
The old value, the value before the function is applied.
- apply(function: Any) Future[Optional[ElementType]] [source]¶
Applies a function on the value, the actual stored value will not change.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function applied on the currently stored reference.
- Returns:
The result of the function application.
- blocking() BlockingAtomicReference[ElementType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingAtomicReference(wrapped: AtomicReference[ElementType])[source]¶
Bases:
AtomicReference
[ElementType
]- compare_and_set(expect: Optional[ElementType], update: Optional[ElementType]) bool [source]¶
Atomically sets the value to the given updated value only if the current value is equal to the expected value.
- Parameters:
expect – The expected value.
update – The new value.
- Returns:
True
if successful, orFalse
if the actual value was not equal to the expected value.
- set(new_value: Optional[ElementType]) None [source]¶
Atomically sets the given value.
- Parameters:
new_value – The new value.
- get_and_set(new_value: Optional[ElementType]) Optional[ElementType] [source]¶
Gets the old value and sets the new value.
- Parameters:
new_value – The new value.
- Returns:
The old value.
- is_none() bool [source]¶
Checks if the stored reference is
None
.- Returns:
True
if the stored reference isNone
,False
otherwise.
- contains(value: Optional[ElementType]) bool [source]¶
Checks if the reference contains the value.
- Parameters:
value – The value to check (is allowed to be
None
).- Returns:
True
if the value is found,False
otherwise.
- alter(function: Any) None [source]¶
Alters the currently stored reference by applying a function on it.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- alter_and_get(function: Any) Optional[ElementType] [source]¶
Alters the currently stored reference by applying a function on it and gets the result.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- Returns:
The new value, the result of the applied function.
- get_and_alter(function: Any) Optional[ElementType] [source]¶
Alters the currently stored reference by applying a function on it and gets the old value.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function that alters the currently stored reference.
- Returns:
The old value, the value before the function is applied.
- apply(function: Any) Optional[ElementType] [source]¶
Applies a function on the value, the actual stored value will not change.
Notes
function
must be an instance of Hazelcast serializable type. It must have a counterpart registered in the server-side that implements thecom.hazelcast.core.IFunction
interface with the actual logic of the function to be applied.- Parameters:
function – The function applied on the currently stored reference.
- Returns:
The result of the function application.
- blocking() BlockingAtomicReference[ElementType] [source]¶
Returns a version of this proxy with only blocking method calls.
CountDownLatch¶
- class CountDownLatch(context, group_id, service_name, proxy_name, object_name)[source]¶
Bases:
BaseCPProxy
[BlockingCountDownLatch
]A distributed, concurrent countdown latch data structure.
CountDownLatch is a cluster-wide synchronization aid that allows one or more callers to wait until a set of operations being performed in other callers completes.
The CountDownLatch count can be reset using the try_set_count() method after a countdown has finished, but not during an active count. This allows the same latch instance to be reused.
There is no await_latch() method to wait indefinitely, since this is undesirable in a distributed application: for example, a cluster can split, or the master and replicas could all terminate. In most cases, it is best to configure an explicit timeout, so you have the ability to deal with these situations.
All the API methods in the CountDownLatch offer exactly-once execution semantics. For instance, even if a
count_down()
call is internally retried because of a crashed Hazelcast member, the counter value is decremented only once.
- await_latch(timeout: float) Future[bool] [source]¶
Causes the current thread to wait until the latch has counted down to zero, or an exception is thrown, or the specified waiting time elapses.
If the current count is zero then this method returns
True
.If the current count is greater than zero, then the current thread becomes disabled for thread scheduling purposes and lies dormant until one of the following things happen:
The count reaches zero due to invocations of the
count_down()
methodThis CountDownLatch instance is destroyed
The countdown owner becomes disconnected
The specified waiting time elapses
If the count reaches zero, then the method returns with the value
True
.If the specified waiting time elapses then the value
False
is returned. If the time is less than or equal to zero, the method will not wait at all.- Parameters:
timeout – The maximum time to wait in seconds
- Returns:
True
if the count reached zero,False
if the waiting time elapsed before the count reached zero- Raises:
IllegalStateError – If the Hazelcast instance was shut down while waiting.
- count_down() Future[None] [source]¶
Decrements the count of the latch, releasing all waiting threads if the count reaches zero.
If the current count is greater than zero, then it is decremented. If the new count is zero:
All waiting threads are re-enabled for thread scheduling purposes
Countdown owner is set to
None
.
If the current count equals zero, then nothing happens.
- try_set_count(count: int) Future[bool] [source]¶
Sets the count to the given value if the current count is zero.
If count is not zero, then this method does nothing and returns
False
.- Parameters:
count – The number of times
count_down()
must be invoked before callers can pass throughawait_latch()
.- Returns:
True
if the new count was set,False
if the current count is not zero.
- blocking() BlockingCountDownLatch [source]¶
Returns a version of this proxy with only blocking method calls.
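A minimal sketch of the typical flow, assuming a CP-enabled cluster:
latch = client.cp_subsystem.get_count_down_latch("latch").blocking()

latch.try_set_count(2)  # only succeeds while the current count is zero
latch.count_down()
latch.count_down()
print(latch.await_latch(10))  # True: the count already reached zero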
- class BlockingCountDownLatch(wrapped: CountDownLatch)[source]¶
Bases:
CountDownLatch
- await_latch(timeout: float) bool [source]¶
Causes the current thread to wait until the latch has counted down to zero, or an exception is thrown, or the specified waiting time elapses.
If the current count is zero then this method returns
True
.If the current count is greater than zero, then the current thread becomes disabled for thread scheduling purposes and lies dormant until one of the following things happen:
The count reaches zero due to invocations of the
count_down()
methodThis CountDownLatch instance is destroyed
The countdown owner becomes disconnected
The specified waiting time elapses
If the count reaches zero, then the method returns with the value
True
.If the specified waiting time elapses then the value
False
is returned. If the time is less than or equal to zero, the method will not wait at all.- Parameters:
timeout – The maximum time to wait in seconds
- Returns:
True
if the count reached zero,False
if the waiting time elapsed before the count reached zero- Raises:
IllegalStateError – If the Hazelcast instance was shut down while waiting.
- count_down() None [source]¶
Decrements the count of the latch, releasing all waiting threads if the count reaches zero.
If the current count is greater than zero, then it is decremented. If the new count is zero:
All waiting threads are re-enabled for thread scheduling purposes
Countdown owner is set to
None
.
If the current count equals zero, then nothing happens.
- try_set_count(count: int) bool [source]¶
Sets the count to the given value if the current count is zero.
If count is not zero, then this method does nothing and returns
False
.- Parameters:
count – The number of times
count_down()
must be invoked before callers can pass throughawait_latch()
.- Returns:
True
if the new count was set,False
if the current count is not zero.
- blocking() BlockingCountDownLatch [source]¶
Returns a version of this proxy with only blocking method calls.
FencedLock¶
- class FencedLock(context, group_id, service_name, proxy_name, object_name)[source]¶
Bases:
SessionAwareCPProxy
[BlockingFencedLock
]A linearizable, distributed lock.
FencedLock is CP with respect to the CAP principle. It works on top of the Raft consensus algorithm. It offers linearizability during crash-stop failures and network partitions. If a network partition occurs, it remains available on at most one side of the partition.
FencedLock works on top of CP sessions. Please refer to CP Session documentation section for more information.
By default, FencedLock is reentrant. Once a caller acquires the lock, it can acquire the lock reentrantly as many times as it wants in a linearizable manner. You can configure the reentrancy behaviour on the member side. For instance, reentrancy can be disabled and FencedLock can work as a non-reentrant mutex. One can also set a custom reentrancy limit. When the reentrancy limit is reached, FencedLock does not block a lock call. Instead, it fails with
LockAcquireLimitReachedError
or a specified return value. Please check the locking methods to see details about the behaviour.
It is advised to use this proxy in blocking mode. Although non-blocking usage is possible, it requires extra care. FencedLock uses the id of the thread that makes the request to distinguish lock owners. When used in non-blocking mode, added callbacks or continuations are generally not executed in the thread that makes the request. That causes the code below to fail most of the time, since the lock is acquired on the main thread but the unlock request is made in another thread.
lock = client.cp_subsystem.get_lock("lock")

def cb(_):
    lock.unlock()

lock.lock().add_done_callback(cb)
- INVALID_FENCE = 0¶
- lock() Future[int] [source]¶
Acquires the lock and returns the fencing token assigned to the current thread.
If the lock is acquired reentrantly, the same fencing token is returned, or the lock() call can fail with LockAcquireLimitReachedError if the lock acquire limit is already reached.
If the lock is not available then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
Fencing tokens are monotonic numbers that are incremented each time the lock switches from the free state to the acquired state. They are simply used for ordering lock holders. A lock holder can pass its fencing token to the shared resource to fence off previous lock holders. When this resource receives an operation, it can validate the fencing token in the operation.
Consider the following scenario where the lock is free initially
lock = client.cp_subsystem.get_lock("lock").blocking()
fence1 = lock.lock()  # (1)
fence2 = lock.lock()  # (2)
assert fence1 == fence2
lock.unlock()
lock.unlock()
fence3 = lock.lock()  # (3)
assert fence3 > fence1
In this scenario, the lock is acquired by a thread in the cluster. Then, the same thread reentrantly acquires the lock again. The fencing token returned from the second acquire is equal to the one returned from the first acquire, because of reentrancy. After the second acquire, the lock is released 2 times, hence becomes free. There is a third lock acquire here, which returns a new fencing token. Because this last lock acquire is not reentrant, its fencing token is guaranteed to be larger than the previous tokens, independent of the thread that has acquired the lock.
- Returns:
The fencing token.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
LockAcquireLimitReachedError – If the lock call is reentrant and the configured lock acquire limit is already reached.
- try_lock(timeout: float = 0) Future[int] [source]¶
Acquires the lock if it is free within the given waiting time, or if it is already held by the current thread at the time of invocation and the acquire limit is not exceeded, and returns the fencing token assigned to the current thread.
If the lock is acquired reentrantly, the same fencing token is returned. If the lock acquire limit is exceeded, then this method immediately returns INVALID_FENCE, which represents a failed lock attempt.
If the lock is not available, then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock is acquired by the current thread or the specified waiting time elapses.
If the specified waiting time elapses, then INVALID_FENCE is returned. If the time is less than or equal to zero, the method does not wait at all. By default, the timeout is set to zero.
A typical usage idiom for this method would be
lock = client.cp_subsystem.get_lock("lock").blocking()
fence = lock.try_lock()
if fence != lock.INVALID_FENCE:
    try:
        pass  # manipulate the protected state
    finally:
        lock.unlock()
else:
    pass  # perform another action
This usage ensures that the lock is unlocked if it was acquired, and doesn’t try to unlock if the lock was not acquired.
See also the lock() function for more information about fences.
- Parameters:
timeout – The maximum time to wait for the lock in seconds.
- Returns:
The fencing token if the lock was acquired, and INVALID_FENCE otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- unlock() Future[None] [source]¶
Releases the lock if the lock is currently held by the current thread.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
IllegalMonitorStateError – If the lock is not held by the current thread
- is_locked() Future[bool] [source]¶
Returns whether this lock is locked or not.
- Returns:
True if this lock is locked by any thread in the cluster, False otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- is_locked_by_current_thread() Future[bool] [source]¶
Returns whether the lock is held by the current thread or not.
- Returns:
True if the lock is held by the current thread, False otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- get_lock_count() Future[int] [source]¶
Returns the reentrant lock count if the lock is held by any thread in the cluster.
- Returns:
The reentrant lock count if the lock is held by any thread in the cluster.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- blocking() BlockingFencedLock [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingFencedLock(wrapped: FencedLock)[source]¶
Bases: FencedLock
- lock() int [source]¶
Acquires the lock and returns the fencing token assigned to the current thread.
If the lock is acquired reentrantly, the same fencing token is returned, or the lock() call can fail with LockAcquireLimitReachedError if the lock acquire limit is already reached.
If the lock is not available, then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
Fencing tokens are monotonic numbers that are incremented each time the lock switches from the free state to the acquired state. They are simply used for ordering lock holders. A lock holder can pass its fencing token to the shared resource to fence off previous lock holders. When this resource receives an operation, it can validate the fencing token in the operation.
Consider the following scenario where the lock is free initially
lock = client.cp_subsystem.get_lock("lock").blocking()
fence1 = lock.lock()  # (1)
fence2 = lock.lock()  # (2)
assert fence1 == fence2
lock.unlock()
lock.unlock()
fence3 = lock.lock()  # (3)
assert fence3 > fence1
In this scenario, the lock is acquired by a thread in the cluster. Then, the same thread reentrantly acquires the lock again. The fencing token returned from the second acquire is equal to the one returned from the first acquire, because of reentrancy. After the second acquire, the lock is released 2 times, hence becomes free. There is a third lock acquire here, which returns a new fencing token. Because this last lock acquire is not reentrant, its fencing token is guaranteed to be larger than the previous tokens, independent of the thread that has acquired the lock.
- Returns:
The fencing token.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
LockAcquireLimitReachedError – If the lock call is reentrant and the configured lock acquire limit is already reached.
- try_lock(timeout: float = 0) int [source]¶
Acquires the lock if it is free within the given waiting time, or if it is already held by the current thread at the time of invocation and the acquire limit is not exceeded, and returns the fencing token assigned to the current thread.
If the lock is acquired reentrantly, the same fencing token is returned. If the lock acquire limit is exceeded, then this method immediately returns INVALID_FENCE, which represents a failed lock attempt.
If the lock is not available, then the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock is acquired by the current thread or the specified waiting time elapses.
If the specified waiting time elapses, then INVALID_FENCE is returned. If the time is less than or equal to zero, the method does not wait at all. By default, the timeout is set to zero.
A typical usage idiom for this method would be
lock = client.cp_subsystem.get_lock("lock").blocking()
fence = lock.try_lock()
if fence != lock.INVALID_FENCE:
    try:
        pass  # manipulate the protected state
    finally:
        lock.unlock()
else:
    pass  # perform another action
This usage ensures that the lock is unlocked if it was acquired, and doesn’t try to unlock if the lock was not acquired.
See also the lock() function for more information about fences.
- Parameters:
timeout – The maximum time to wait for the lock in seconds.
- Returns:
The fencing token if the lock was acquired, and INVALID_FENCE otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- unlock() None [source]¶
Releases the lock if the lock is currently held by the current thread.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
IllegalMonitorStateError – If the lock is not held by the current thread
- is_locked() bool [source]¶
Returns whether this lock is locked or not.
- Returns:
True if this lock is locked by any thread in the cluster, False otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- is_locked_by_current_thread() bool [source]¶
Returns whether the lock is held by the current thread or not.
- Returns:
True if the lock is held by the current thread, False otherwise.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- get_lock_count() int [source]¶
Returns the reentrant lock count if the lock is held by any thread in the cluster.
- Returns:
The reentrant lock count if the lock is held by any thread in the cluster.
- Raises:
LockOwnershipLostError – If the underlying CP session was closed before the client releases the lock
- blocking() BlockingFencedLock [source]¶
Returns a version of this proxy with only blocking method calls.
Semaphore¶
- class Semaphore(context, group_id, service_name, proxy_name, object_name)[source]¶
Bases: BaseCPProxy[BlockingSemaphore]
A linearizable, distributed semaphore.
Semaphores are often used to restrict the number of callers that can access some physical or logical resource.
Semaphore is a cluster-wide counting semaphore. Conceptually, it maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Dually, each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the semaphore just keeps a count of the number available and acts accordingly.
Hazelcast’s distributed semaphore implementation guarantees that callers invoking any of the acquire() methods are selected to obtain permits in the order of their invocations (first-in-first-out; FIFO). Note that FIFO ordering refers to the order in which the primary replica of the Semaphore receives these acquire requests. Therefore, it is possible for one member to invoke acquire() before another member, but have its request hit the primary replica after the other member’s.
This class also provides convenient ways to work with multiple permits at once. Beware of the increased risk of indefinite postponement when using multiple-permit acquires. If permits are released one by one, a caller waiting for one permit will acquire it before a caller waiting for multiple permits, regardless of the call order.
Correct usage of a semaphore is established by programming convention in the application.
It works on top of the Raft consensus algorithm. It offers linearizability during crash failures and network partitions. It is CP with respect to the CAP principle. If a network partition occurs, it remains available on at most one side of the partition.
It has two variations:
The default implementation accessed via cp_subsystem is session-aware. In this one, when a caller makes its very first acquire() call, it starts a new CP session with the underlying CP group. Then, the liveliness of the caller is tracked via this CP session. When the caller fails, permits acquired by this caller are automatically and safely released. However, the session-aware version comes with a limitation: a client cannot release permits before acquiring them first. In other words, a client can release only the permits it has acquired earlier. This means you can acquire a permit from one thread and release it from another thread using the same Hazelcast client, but not from different instances of the Hazelcast client. You can use the session-aware CP Semaphore implementation by disabling JDK compatibility via the jdk-compatible server-side setting. Although the session-aware implementation differs slightly from the JDK Semaphore, we think it is a better fit for distributed environments because of its safe auto-cleanup mechanism for acquired permits.
The second implementation offered by cp_subsystem is sessionless. This implementation does not perform auto-cleanup of acquired permits on failures. Acquired permits are not bound to threads and can be released without being acquired first. However, you need to handle failed permit owners on your own. If a Hazelcast server or client fails while holding some permits, they will not be automatically released. You can use the sessionless CP Semaphore implementation by enabling JDK compatibility via the jdk-compatible server-side setting.
There is a subtle difference between the lock and semaphore abstractions. A lock can be assigned to at most one endpoint at a time, so we have a total order among its holders. However, permits of a semaphore can be assigned to multiple endpoints at a time, which implies that we may not have a total order among permit holders. In fact, permit holders are only partially ordered. For this reason, the fencing token approach, which is explained in FencedLock, does not work for the semaphore abstraction. Moreover, each permit is an independent entity. Multiple permit acquires and reentrant lock acquires of a single endpoint are not equivalent. The only case where a semaphore behaves like a lock is the binary case, where the semaphore has only 1 permit. In this case, the semaphore works like a non-reentrant lock.
All of the API methods in the new CP Semaphore implementation offer exactly-once execution semantics for the session-aware version. For instance, even if a release() call is internally retried because of a crashed Hazelcast member, the permit is released only once. However, this guarantee is not given for the sessionless, a.k.a. JDK-compatible, CP Semaphore.
- init(permits: int) Future[bool] [source]¶
Tries to initialize this Semaphore instance with the given permit count.
- Parameters:
permits – The given permit count.
- Returns:
True if the initialization succeeds, False if already initialized.
- Raises:
AssertionError – If permits is negative.
- acquire(permits: int = 1) Future[None] [source]¶
Acquires the given number of permits if they are available, and returns immediately, reducing the number of available permits by the given amount.
If insufficient permits are available then the result of the returned future is not set until one of the following things happens:
Some other caller invokes one of the release methods for this semaphore, the current caller is next to be assigned permits, and the number of available permits satisfies this request
This Semaphore instance is destroyed
- Parameters:
permits – Optional number of permits to acquire; defaults to 1 when not specified.
- Raises:
AssertionError – If permits is not positive.
- available_permits() Future[int] [source]¶
Returns the number of permits currently available in this semaphore.
This method is typically used for debugging and testing purposes.
- Returns:
The number of permits available in this semaphore.
- drain_permits() Future[int] [source]¶
Acquires and returns all permits that are available at invocation time.
- Returns:
The number of permits drained.
- reduce_permits(reduction: int) Future[None] [source]¶
Reduces the number of available permits by the indicated amount.
This method differs from acquire as it does not block until permits become available. Similarly, if the caller has acquired some permits, they are not released with this call.
- Parameters:
reduction – The number of permits to reduce.
- Raises:
AssertionError – If reduction is negative.
- increase_permits(increase: int) Future[None] [source]¶
Increases the number of available permits by the indicated amount.
If there are some callers waiting for permits to become available, they will be notified. Moreover, if the caller has acquired some permits, they are not released with this call.
- Parameters:
increase – The number of permits to increase.
- Raises:
AssertionError – If increase is negative.
- release(permits: int = 1) Future[None] [source]¶
Releases the given number of permits and increases the number of available permits by that amount.
If some callers in the cluster are blocked for acquiring permits, they will be notified.
If the underlying Semaphore implementation is non-JDK-compatible (configured via the jdk-compatible server-side setting), then a client can only release a permit which it has acquired before. In other words, a client cannot release a permit without acquiring it first.
Otherwise, meaning that the underlying implementation is JDK-compatible (configured via the jdk-compatible server-side setting), there is no requirement that a client releasing a permit must have acquired that permit by calling one of the acquire() methods. A client can freely release a permit without acquiring it first. In this case, correct usage of the semaphore is established by programming convention in the application.
- Parameters:
permits – Optional number of permits to release; defaults to 1 when not specified.
- Raises:
AssertionError – If permits is not positive.
IllegalStateError – If the Semaphore is non-JDK-compatible and the caller does not have a permit.
- try_acquire(permits: int = 1, timeout: float = 0) Future[bool] [source]¶
Acquires the given number of permits and returns True if they become available during the given waiting time.
If permits are acquired, the number of available permits in the Semaphore instance is also reduced by the given amount.
If insufficient permits are available, then the result of the returned future is not set until one of the following things happens:
Permits are released by other callers, the current caller is next to be assigned permits and the number of available permits satisfies this request
The specified waiting time elapses
- Parameters:
permits – The number of permits to acquire; defaults to 1 when not specified.
timeout – Optional timeout in seconds to wait for the permits; when it’s not specified, the operation returns immediately after the acquire attempt.
- Returns:
True if all permits were acquired, False if the waiting time elapsed before all permits could be acquired.
- Raises:
AssertionError – If permits is not positive.
Executor¶
- class Executor(service_name: str, name: str, context)[source]¶
Bases: Proxy[BlockingExecutor]
An object that executes submitted executable tasks.
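Because tasks run on the members, a submitted task must be serializable and must have a counterpart (for example, a Java Callable) registered on the server side. The EchoTask class below and its factory and class ids are hypothetical; this is only a sketch of the client-side shape of such a task:
from hazelcast.serialization.api import IdentifiedDataSerializable

class EchoTask(IdentifiedDataSerializable):
    # Hypothetical ids; they must match the server-side factory registration.
    FACTORY_ID = 1
    CLASS_ID = 1

    def write_data(self, object_data_output):
        pass  # no state to serialize

    def read_data(self, object_data_input):
        pass

    def get_factory_id(self):
        return self.FACTORY_ID

    def get_class_id(self):
        return self.CLASS_ID

executor = client.get_executor("executor").blocking()
results = executor.execute_on_all_members(EchoTask())
executor.shutdown()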
- execute_on_key_owner(key: Any, task: Any) Future[Any] [source]¶
Executes a task on the owner of the specified key.
- Parameters:
key – The specified key.
task – A task executed on the owner of the specified key.
- Returns:
The result of the task.
- execute_on_member(member: MemberInfo, task: Any) Future[Any] [source]¶
Executes a task on the specified member.
- Parameters:
member – The specified member.
task – The task executed on the specified member.
- Returns:
The result of the task.
- execute_on_members(members: Sequence[MemberInfo], task: Any) Future[List[Any]] [source]¶
Executes a task on each of the specified members.
- Parameters:
members – The specified members.
task – The task executed on the specified members.
- Returns:
The list of results of the tasks on each member.
- execute_on_all_members(task: Any) Future[List[Any]] [source]¶
Executes a task on all the known cluster members.
- Parameters:
task – The task executed on all the members.
- Returns:
The list of results of the tasks on each member.
- is_shutdown() Future[bool] [source]¶
Determines whether this executor has been shutdown or not.
- Returns:
True if the executor has been shutdown, False otherwise.
- shutdown() Future[None] [source]¶
Initiates an orderly shutdown process. Tasks that were submitted before shutdown are executed, but new tasks will not be accepted.
- blocking() BlockingExecutor [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingExecutor(wrapped: Executor)[source]¶
Bases: Executor
- name¶
- service_name¶
- execute_on_key_owner(key: Any, task: Any) Any [source]¶
Executes a task on the owner of the specified key.
- Parameters:
key – The specified key.
task – A task executed on the owner of the specified key.
- Returns:
The result of the task.
- execute_on_member(member: MemberInfo, task: Any) Any [source]¶
Executes a task on the specified member.
- Parameters:
member – The specified member.
task – The task executed on the specified member.
- Returns:
The result of the task.
- execute_on_members(members: Sequence[MemberInfo], task: Any) List[Any] [source]¶
Executes a task on each of the specified members.
- Parameters:
members – The specified members.
task – The task executed on the specified members.
- Returns:
The list of results of the tasks on each member.
- execute_on_all_members(task: Any) List[Any] [source]¶
Executes a task on all the known cluster members.
- Parameters:
task – The task executed on all the members.
- Returns:
The list of results of the tasks on each member.
- is_shutdown() bool [source]¶
Determines whether this executor has been shutdown or not.
- Returns:
True if the executor has been shutdown, False otherwise.
- shutdown() None [source]¶
Initiates an orderly shutdown process. Tasks that were submitted before shutdown are executed, but new tasks will not be accepted.
- blocking() BlockingExecutor [source]¶
Returns a version of this proxy with only blocking method calls.
FlakeIdGenerator¶
- class FlakeIdGenerator(service_name, name, context)[source]¶
Bases: Proxy[BlockingFlakeIdGenerator]
A cluster-wide unique ID generator. Generated IDs are int values and are k-ordered (roughly ordered). IDs are in the range from 0 to 2^63 - 1.
The IDs contain a timestamp component and a node ID component, which is assigned when the member joins the cluster. This allows the IDs to be ordered and unique without any coordination between members, which makes the generator safe even in a split-brain scenario.
The timestamp component is in milliseconds since 1.1.2018, 0:00 UTC and has 41 bits. This caps the useful lifespan of the generator to a little less than 70 years (until ~2088). The sequence component is 6 bits. If more than 64 IDs are requested in a single millisecond, IDs will gracefully overflow to the next millisecond and uniqueness is guaranteed in this case. The implementation does not allow overflowing by more than 15 seconds; if IDs are requested at a higher rate, the call will block. Note, however, that clients are able to generate IDs even faster, because each call goes to a different (random) member and the 64 IDs/ms limit applies to a single member.
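Assuming the default bit layout described above (a 41-bit timestamp in the most significant bits, then a 6-bit sequence, then a 16-bit node ID), the components of a generated ID could be unpacked as follows; this helper is illustrative and not part of the client API:
EPOCH_MS = 1514764800000  # 1.1.2018, 0:00 UTC, in milliseconds

def decode_flake_id(flake_id: int):
    # Layout, most to least significant: 41-bit timestamp,
    # 6-bit sequence, 16-bit node ID.
    node_id = flake_id & 0xFFFF
    sequence = (flake_id >> 16) & 0x3F
    timestamp_ms = (flake_id >> 22) + EPOCH_MS
    return timestamp_ms, sequence, node_id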
- Node ID overflow:
It is possible to generate IDs on any member or client as long as there is at least one member with join version smaller than 2^16 in the cluster. The remedy is to restart the cluster: nodeId will be assigned from zero again. Uniqueness after the restart will be preserved thanks to the timestamp component.
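Basic usage with a blocking proxy (the generator name is arbitrary):
generator = client.get_flake_id_generator("id-generator").blocking()
unique_id = generator.new_id()
print("Generated ID:", unique_id)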
- new_id() Future[int] [source]¶
Generates and returns a cluster-wide unique ID.
This method goes to a random member and gets a batch of IDs, which will then be returned locally for a limited time. The pre-fetch size and the validity time can be configured.
Note
Values returned from this method may not be strictly ordered.
- Returns:
New cluster-wide unique ID.
- Raises:
HazelcastError – If node ID for all members in the cluster is out of valid range. See the Node ID overflow note above.
- blocking() BlockingFlakeIdGenerator [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingFlakeIdGenerator(wrapped: FlakeIdGenerator)[source]¶
Bases: FlakeIdGenerator
- name¶
- service_name¶
- new_id() int [source]¶
Generates and returns a cluster-wide unique ID.
This method goes to a random member and gets a batch of IDs, which will then be returned locally for a limited time. The pre-fetch size and the validity time can be configured.
Note
Values returned from this method may not be strictly ordered.
- Returns:
New cluster-wide unique ID.
- Raises:
HazelcastError – If node ID for all members in the cluster is out of valid range. See the Node ID overflow note above.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True if this proxy is destroyed successfully, False otherwise.
- blocking() BlockingFlakeIdGenerator [source]¶
Returns a version of this proxy with only blocking method calls.
List¶
- class List(service_name, name, context)[source]¶
Bases: PartitionSpecificProxy[BlockingList], Generic[ItemType]
Concurrent, distributed implementation of List.
The Hazelcast List is not a partitioned data structure, so the entire content of the List is stored on a single machine (and on its backup). Consequently, the List will not scale by adding more members to the cluster.
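A short blocking-mode sketch (the list name and items are arbitrary):
my_list = client.get_list("my-list").blocking()

my_list.add("item-1")
my_list.add_at(0, "item-0")

print(my_list.get_all())           # ['item-0', 'item-1']
print(my_list.index_of("item-1"))  # 1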
- add(item: ItemType) Future[bool] [source]¶
Adds the specified item to the end of this list.
- Parameters:
item – the specified item to be appended to this list.
- Returns:
True if item is added, False otherwise.
- add_at(index: int, item: ItemType) Future[None] [source]¶
Adds the specified item at the specific position in this list. Element in this position and following elements are shifted to the right, if any.
- Parameters:
index – The specified index to insert the item.
item – The specified item to be inserted.
- add_all(items: Sequence[ItemType]) Future[bool] [source]¶
Adds all of the items in the specified collection to the end of this list.
The order of new elements is determined by the specified collection’s iterator.
- Parameters:
items – The specified collection which includes the elements to be added to list.
- Returns:
True if this call changed the list, False otherwise.
- add_all_at(index: int, items: Sequence[ItemType]) Future[bool] [source]¶
Adds all of the elements in the specified collection into this list at the specified position.
Elements in this positions and following elements are shifted to the right, if any. The order of new elements is determined by the specified collection’s iterator.
- Parameters:
index – The specified index at which the first element of specified collection is added.
items – The specified collection which includes the elements to be added to list.
- Returns:
True if this call changed the list, False otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) Future[str] [source]¶
Adds an item listener for this list. Listener will be notified for all list add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – To be called when an item is added to this list.
item_removed_func – To be called when an item is deleted from this list.
- Returns:
A registration id which is used as a key to remove the listener.
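For example, a listener that prints added items can be registered as follows; the callback reads the item attribute of the ItemEvent it receives, and the non-blocking proxy returns a Future resolved with result():
def on_item_added(event):
    print("Item added:", event.item)

registration_id = my_list.add_listener(
    include_value=True,
    item_added_func=on_item_added,
).result()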
- contains(item: ItemType) Future[bool] [source]¶
Determines whether this list contains the specified item or not.
- Parameters:
item – The specified item.
- Returns:
True if the specified item exists in this list, False otherwise.
- contains_all(items: Sequence[ItemType]) Future[bool] [source]¶
Determines whether this list contains all of the items in specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True if all of the items in specified collection exist in this list, False otherwise.
- get(index: int) Future[ItemType] [source]¶
Returns the item which is in the specified position in this list.
- Parameters:
index – the specified index of the item to be returned.
- Returns:
The item in the specified position in this list.
- get_all() Future[List[ItemType]] [source]¶
Returns all the items in this list.
- Returns:
All the items in this list.
- iterator() Future[List[ItemType]] [source]¶
Returns an iterator over the elements in this list in proper sequence, same as get_all.
- Returns:
All the items in this list.
- index_of(item: ItemType) Future[int] [source]¶
Returns the first index of specified item’s occurrences in this list.
If specified item is not present in this list, returns -1.
- Parameters:
item – The specified item to be searched for.
- Returns:
The first index of specified item’s occurrences, or -1 if item is not present in this list.
- is_empty() Future[bool] [source]¶
Determines whether this list is empty or not.
- Returns:
True if the list contains no elements, False otherwise.
- last_index_of(item: ItemType) Future[int] [source]¶
Returns the last index of specified item’s occurrences in this list.
If specified item is not present in this list, returns -1.
- Parameters:
item – The specified item to be searched for.
- Returns:
The last index of specified item’s occurrences, or -1 if item is not present in this list.
- list_iterator(index: int = 0) Future[List[ItemType]] [source]¶
Returns a list iterator of the elements in this list.
If an index is provided, iterator starts from this index.
- Parameters:
index – Index of first element to be returned from the list iterator.
- Returns:
List of the elements in this list.
- remove(item: ItemType) Future[bool] [source]¶
Removes the specified element’s first occurrence from the list if it exists in this list.
- Parameters:
item – The specified element.
- Returns:
True if the specified element is present in this list, False otherwise.
- remove_at(index: int) Future[ItemType] [source]¶
Removes the item at the specified position in this list.
Element in this position and following elements are shifted to the left, if any.
- Parameters:
index – Index of the item to be removed.
- Returns:
The item previously at the specified index.
- remove_all(items: Sequence[ItemType]) Future[bool] [source]¶
Removes all of the elements that are present in the specified collection from this list.
- Parameters:
items – The specified collection.
- Returns:
True if this list changed as a result of the call, False otherwise.
- remove_listener(registration_id: str) Future[bool] [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True if the item listener is removed, False otherwise.
- retain_all(items: Sequence[ItemType]) Future[bool] [source]¶
Retains only the items that are contained in the specified collection.
It means, items which are not present in the specified collection are removed from this list.
- Parameters:
items – Collections which includes the elements to be retained in this list.
- Returns:
True if this list changed as a result of the call, False otherwise.
- size() Future[int] [source]¶
Returns the number of elements in this list.
- Returns:
Number of elements in this list.
- set_at(index: int, item: ItemType) Future[ItemType] [source]¶
Replaces the element at the specified position in this list with the specified element.
- Parameters:
index – Index of the item to be replaced.
item – Item to be stored.
- Returns:
The previous item in the specified index.
- sub_list(from_index: int, to_index: int) Future[List[ItemType]] [source]¶
Returns a sublist from this list, from from_index (inclusive) to to_index (exclusive).
The returned list is backed by this list, so non-structural changes in the returned list are reflected in this list, and vice-versa.
- Parameters:
from_index – The start point(inclusive) of the sub_list.
to_index – The end point(exclusive) of the sub_list.
- Returns:
A view of the specified range within this list.
- blocking() BlockingList[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingList(wrapped: List[ItemType])[source]¶
Bases: List[ItemType]
- name¶
- service_name¶
- add(item: ItemType) bool [source]¶
Adds the specified item to the end of this list.
- Parameters:
item – the specified item to be appended to this list.
- Returns:
True if item is added, False otherwise.
- add_at(index: int, item: ItemType) None [source]¶
Adds the specified item at the specific position in this list. Element in this position and following elements are shifted to the right, if any.
- Parameters:
index – The specified index to insert the item.
item – The specified item to be inserted.
- add_all(items: Sequence[ItemType]) bool [source]¶
Adds all of the items in the specified collection to the end of this list.
The order of new elements is determined by the specified collection’s iterator.
- Parameters:
items – The specified collection which includes the elements to be added to list.
- Returns:
True if this call changed the list, False otherwise.
- add_all_at(index: int, items: Sequence[ItemType]) bool [source]¶
Adds all of the elements in the specified collection into this list at the specified position.
Elements in this positions and following elements are shifted to the right, if any. The order of new elements is determined by the specified collection’s iterator.
- Parameters:
index – The specified index at which the first element of specified collection is added.
items – The specified collection which includes the elements to be added to list.
- Returns:
True if this call changed the list, False otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) str [source]¶
Adds an item listener for this list. Listener will be notified for all list add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – To be called when an item is added to this list.
item_removed_func – To be called when an item is deleted from this list.
- Returns:
A registration id which is used as a key to remove the listener.
- contains(item: ItemType) bool [source]¶
Determines whether this list contains the specified item or not.
- Parameters:
item – The specified item.
- Returns:
True if the specified item exists in this list, False otherwise.
- contains_all(items: Sequence[ItemType]) bool [source]¶
Determines whether this list contains all of the items in specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True if all of the items in specified collection exist in this list, False otherwise.
- get(index: int) ItemType [source]¶
Returns the item which is in the specified position in this list.
- Parameters:
index – the specified index of the item to be returned.
- Returns:
The item in the specified position in this list.
- get_all() List[ItemType] [source]¶
Returns all the items in this list.
- Returns:
All the items in this list.
- iterator() List[ItemType] [source]¶
Returns an iterator over the elements in this list in proper sequence, same as get_all.
- Returns:
All the items in this list.
- index_of(item: ItemType) int [source]¶
Returns the first index of specified item’s occurrences in this list.
If specified item is not present in this list, returns -1.
- Parameters:
item – The specified item to be searched for.
- Returns:
The first index of specified item’s occurrences, or -1 if item is not present in this list.
- is_empty() bool [source]¶
Determines whether this list is empty or not.
- Returns:
True if the list contains no elements, False otherwise.
- last_index_of(item: ItemType) int [source]¶
Returns the last index of specified item’s occurrences in this list.
If specified item is not present in this list, returns -1.
- Parameters:
item – The specified item to be searched for.
- Returns:
The last index of specified item’s occurrences, or -1 if item is not present in this list.
- list_iterator(index: int = 0) List[ItemType] [source]¶
Returns a list iterator of the elements in this list.
If an index is provided, iterator starts from this index.
- Parameters:
index – Index of first element to be returned from the list iterator.
- Returns:
List of the elements in this list.
- remove(item: ItemType) bool [source]¶
Removes the specified element’s first occurrence from the list if it exists in this list.
- Parameters:
item – The specified element.
- Returns:
True if the specified element is present in this list, False otherwise.
- remove_at(index: int) ItemType [source]¶
Removes the item at the specified position in this list.
Element in this position and following elements are shifted to the left, if any.
- Parameters:
index – Index of the item to be removed.
- Returns:
The item previously at the specified index.
- remove_all(items: Sequence[ItemType]) bool [source]¶
Removes all of the elements that are present in the specified collection from this list.
- Parameters:
items – The specified collection.
- Returns:
True if this list changed as a result of the call, False otherwise.
- remove_listener(registration_id: str) bool [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True if the item listener is removed, False otherwise.
- retain_all(items: Sequence[ItemType]) bool [source]¶
Retains only the items that are contained in the specified collection.
It means, items which are not present in the specified collection are removed from this list.
- Parameters:
items – Collections which includes the elements to be retained in this list.
- Returns:
True if this list changed as a result of the call, False otherwise.
- size() int [source]¶
Returns the number of elements in this list.
- Returns:
Number of elements in this list.
- set_at(index: int, item: ItemType) ItemType [source]¶
Replaces the element at the specified position in this list with the specified element.
- Parameters:
index – Index of the item to be replaced.
item – Item to be stored.
- Returns:
The previous item in the specified index.
- sub_list(from_index: int, to_index: int) List[ItemType] [source]¶
Returns a sublist from this list, from from_index (inclusive) to to_index (exclusive).
The returned list is backed by this list, so non-structural changes in the returned list are reflected in this list, and vice-versa.
- Parameters:
from_index – The start point(inclusive) of the sub_list.
to_index – The end point(exclusive) of the sub_list.
- Returns:
A view of the specified range within this list.
- blocking() BlockingList[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
Map¶
- class Map(service_name, name, context)[source]¶
Bases: Proxy[BlockingMap], Generic[KeyType, ValueType]
Hazelcast Map client proxy to access the map on the cluster.
Concurrent, distributed, observable and queryable map. This map can work either async (non-blocking) or sync (blocking). Blocking calls return the value of the call and block the execution until the return value is calculated. However, async calls return a Future and do not block execution. The result of the Future can be used whenever it is ready. A Future’s result can be obtained by blocking the execution via future.result().
Example
>>> my_map = client.get_map("my_map").blocking()  # sync map, all operations are blocking
>>> print("map.put", my_map.put("key", "value"))
>>> print("map.contains_key", my_map.contains_key("key"))
>>> print("map.get", my_map.get("key"))
>>> print("map.size", my_map.size())
Example
>>> my_map = client.get_map("map")  # async map, all operations are non-blocking
>>> def put_callback(f):
>>>     print("map.put", f.result())
>>> my_map.put("key", "async_val").add_done_callback(put_callback)
>>>
>>> print("map.size", my_map.size().result())
>>>
>>> def contains_key_callback(f):
>>>     print("map.contains_key", f.result())
>>> my_map.contains_key("key").add_done_callback(contains_key_callback)
This class does not allow None to be used as a key or value.
- add_entry_listener(include_value: bool = False, key: Optional[KeyType] = None, predicate: Optional[Predicate] = None, added_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, removed_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, updated_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, evicted_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, evict_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, clear_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, merged_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, expired_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, loaded_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None) Future[str] [source]¶
Adds a continuous entry listener for this map.
Listener will get notified for map events filtered with given parameters.
- Parameters:
include_value – Whether received event should include the value or not.
key – Key for filtering the events.
predicate – Predicate for filtering the events.
added_func – Function to be called when an entry is added to map.
removed_func – Function to be called when an entry is removed from map.
updated_func – Function to be called when an entry is updated.
evicted_func – Function to be called when an entry is evicted from map.
evict_all_func – Function to be called when entries are evicted from map.
clear_all_func – Function to be called when entries are cleared from map.
merged_func – Function to be called when WAN replicated entry is merged.
expired_func – Function to be called when an entry’s time-to-live has expired.
loaded_func – Function to be called when an entry is loaded from a map loader.
- Returns:
A registration id which is used as a key to remove the listener.
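For instance, listening only for additions and removals of a single key might look like this; EntryEvent exposes the key and value of the event:
>>> def on_added(event):
>>>     print("Added:", event.key, "->", event.value)
>>> def on_removed(event):
>>>     print("Removed:", event.key)
>>> registration_id = my_map.add_entry_listener(
>>>     include_value=True,
>>>     key="watched-key",
>>>     added_func=on_added,
>>>     removed_func=on_removed,
>>> ).result()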
- add_index(attributes: Optional[Sequence[str]] = None, index_type: Union[int, str] = 0, name: Optional[str] = None, bitmap_index_options: Optional[Dict[str, Any]] = None) Future[None] [source]¶
Adds an index to this map for the specified entries so that queries can run faster.
Example
Let’s say your map values are Employee objects.
>>> class Employee(IdentifiedDataSerializable):
>>>     active = False
>>>     age = None
>>>     name = None
>>>     # other fields
>>>
>>>     # methods
If you query your values mostly based on age and active fields, you should consider indexing these.
>>> employees = client.get_map("employees")
>>> employees.add_index(attributes=["age"])  # Sorted index for range queries
>>> employees.add_index(attributes=["active"], index_type=IndexType.HASH)  # Hash index for equality predicates
Index attribute should either have a getter method or be public. You should also make sure to add the indexes before adding entries to this map.
Indexing is executed in parallel on each partition by operation threads. The Map is not blocked during this operation. The time taken is proportional to the size of the Map and the number of Members.
Until the index finishes being created, any searches for the attribute will use a full Map scan, thus avoiding using a partially built index and returning incorrect results.
- Parameters:
attributes – List of indexed attributes.
index_type – Type of the index. By default, set to SORTED.
name – Name of the index.
bitmap_index_options – Bitmap index options.
unique_key (str): The unique key attribute is used as a source of values which uniquely identify each entry being inserted into an index. Defaults to KEY_ATTRIBUTE_NAME. See hazelcast.config.QueryConstants for possible values.
unique_key_transformation (int|str): The transformation is applied to every value extracted from the unique key attribute. Defaults to OBJECT. See hazelcast.config.UniqueKeyTransformation for possible values.
- add_interceptor(interceptor: Any) Future[str] [source]¶
Adds an interceptor for this map.
Added interceptor will intercept operations and execute user defined methods.
- Parameters:
interceptor – Interceptor for the map which includes user defined methods.
- Returns:
Id of registered interceptor.
- aggregate(aggregator: Aggregator[AggregatorResultType], predicate: Optional[Predicate] = None) Future[AggregatorResultType] [source]¶
Applies the aggregation logic on map entries and filters the result with the predicate, if given.
- Parameters:
aggregator – Aggregator to aggregate the entries with.
predicate – Predicate to filter the entries with.
- Returns:
The result of the aggregation.
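For example, built-in aggregators from the hazelcast.aggregator module can be combined with a predicate; the sketch below assumes entries with an age attribute:
>>> from hazelcast.aggregator import count
>>> from hazelcast.predicate import greater_or_equal
>>> # Number of entries whose "age" attribute is at least 18.
>>> adults = my_map.aggregate(count(), greater_or_equal("age", 18)).result()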
- clear() Future[None] [source]¶
Clears the map.
The MAP_CLEARED event is fired for any registered listeners.
- contains_key(key: KeyType) Future[bool] [source]¶
Determines whether this map contains an entry with the key.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The specified key.
- Returns:
True if this map contains an entry for the specified key, False otherwise.
- contains_value(value: ValueType) Future[bool] [source]¶
Determines whether this map contains one or more keys for the specified value.
- Parameters:
value – The specified value.
- Returns:
True if this map contains an entry for the specified value, False otherwise.
- delete(key: KeyType) Future[None] [source]¶
Removes the mapping for a key from this map if it is present (optional operation).
Unlike remove(object), this operation does not return the removed value, which avoids the serialization cost of the returned value. If the removed value will not be used, a delete operation is preferred over a remove operation for better performance.
The map will not contain a mapping for the specified key once the call returns.
Warning
This method breaks the contract of EntryListener. When an entry is removed by delete(), it fires an EntryEvent with a None old_value. Also, a listener with predicates will have None values, so only the keys can be queried via predicates.
- Parameters:
key – Key of the mapping to be deleted.
- entry_set(predicate: Optional[Predicate] = None) Future[List[Tuple[KeyType, ValueType]]] [source]¶
Returns a list clone of the mappings contained in this map.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Parameters:
predicate – Predicate for the map to filter entries.
- Returns:
The list of key-value tuples in the map.
- evict(key: KeyType) Future[bool] [source]¶
Evicts the specified key from this map.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – Key to evict.
- Returns:
True if the key is evicted, False otherwise.
- evict_all() Future[None] [source]¶
Evicts all keys from this map except the locked ones.
The EVICT_ALL event is fired for any registered listeners.
- execute_on_entries(entry_processor: Any, predicate: Optional[Predicate] = None) Future[List[Any]] [source]¶
Applies the user defined EntryProcessor to all the entries in the map, or to the entries that satisfy the predicate if provided. Returns the results mapped by each key in the map.
- Parameters:
entry_processor – A stateful serializable object which represents the EntryProcessor defined on the server side. This object must have a serializable EntryProcessor counterpart registered on the server side with the actual com.hazelcast.map.EntryProcessor implementation.
predicate – Predicate for filtering the entries.
- Returns:
List of map entries which includes the keys and the results of the entry process.
- execute_on_key(key: KeyType, entry_processor: Any) Future[Any] [source]¶
Applies the user defined EntryProcessor to the entry mapped by the key. Returns the object which is the result of EntryProcessor’s process method.
- Parameters:
key – Specified key for the entry to be processed.
entry_processor – A stateful serializable object which represents the EntryProcessor defined on the server side. This object must have a serializable EntryProcessor counterpart registered on the server side with the actual com.hazelcast.map.EntryProcessor implementation.
- Returns:
Result of entry process.
- execute_on_keys(keys: Sequence[KeyType], entry_processor: Any) Future[List[Any]] [source]¶
Applies the user defined EntryProcessor to the entries mapped by the collection of keys. Returns the results mapped by each key in the collection.
- Parameters:
keys – Collection of the keys for the entries to be processed.
entry_processor – A stateful serializable object which represents the EntryProcessor defined on the server side. This object must have a serializable EntryProcessor counterpart registered on the server side with the actual com.hazelcast.map.EntryProcessor implementation.
- Returns:
List of map entries which includes the keys and the results of the entry process.
- force_unlock(key: KeyType) Future[None] [source]¶
Releases the lock for the specified key regardless of the lock owner.
It always successfully unlocks the key, never blocks, and returns immediately.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The key to lock.
- get(key: KeyType) Future[Optional[ValueType]] [source]¶
Returns the value for the specified key, or None if this map does not contain this key.
Warning
This method returns a clone of original value, modifying the returned value does not change the actual value in the map. One should put modified value back to make changes visible to all nodes.
>>> value = my_map.get(key)
>>> value.update_some_property()
>>> my_map.put(key, value)
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The specified key.
- Returns:
The value for the specified key.
- get_all(keys: Sequence[KeyType]) Future[Dict[KeyType, ValueType]] [source]¶
Returns the entries for the given keys.
Warning
The returned map is NOT backed by the original map, so changes to the original map are NOT reflected in the returned map, and vice-versa.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
keys – Keys to get.
- Returns:
Dictionary of map entries.
- get_entry_view(key: KeyType) Future[SimpleEntryView[KeyType, ValueType]] [source]¶
Returns the EntryView for the specified key.
Warning
This method returns a clone of original mapping, modifying the returned value does not change the actual value in the map. One should put modified value back to make changes visible to all nodes.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The key of the entry.
- Returns:
EntryView of the specified key.
- is_empty() Future[bool] [source]¶
Returns whether this map contains no key-value mappings or not.
- Returns:
True if this map contains no key-value mappings, False otherwise.
- is_locked(key: KeyType) Future[bool] [source]¶
Checks the lock for the specified key.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The key that is checked for lock
- Returns:
True if the lock is acquired, False otherwise.
- key_set(predicate: Optional[Predicate] = None) Future[List[KeyType]] [source]¶
Returns a List clone of the keys contained in this map or the keys of the entries filtered with the predicate if provided.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Parameters:
predicate – Predicate to filter the entries.
- Returns:
A list of the clone of the keys.
- load_all(keys: Optional[Sequence[KeyType]] = None, replace_existing_values: bool = True) Future[None] [source]¶
Loads all keys from the store at server side or loads the given keys if provided.
- Parameters:
keys – Keys of the entry values to load.
replace_existing_values – Whether the existing values will be replaced or not with those loaded from the server side MapLoader.
- lock(key: KeyType, lease_time: Optional[float] = None) Future[None] [source]¶
Acquires the lock for the specified key infinitely or for the specified lease time if provided.
If the lock is not available, the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
You get a lock whether the value is present in the map or not. Other threads (possibly on other systems) would block on their invocation of lock() until the non-existent key is unlocked. If the lock holder introduces the key to the map, the put() operation is not blocked. If a thread not holding a lock on the non-existent key tries to introduce the key while a lock exists on the non-existent key, the put() operation blocks until it is unlocked.
Scope of the lock is this map only. Acquired lock is only for the key in this map.
Locks are re-entrant; so, if the key is locked N times, it should be unlocked N times before another thread can acquire it.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The key to lock.
lease_time – Time in seconds to wait before releasing the lock.
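A typical pessimistic-locking pattern pairs lock() with unlock() in a try/finally block (unlock() is part of the Map API, although it is not shown in this excerpt); a blocking-proxy sketch:
>>> my_map.lock("key")
>>> try:
>>>     value = my_map.get("key")
>>>     my_map.put("key", value)  # update the value while holding the lock
>>> finally:
>>>     my_map.unlock("key")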
- project(projection: Projection[ProjectionType], predicate: Optional[Predicate] = None) Future[ProjectionType] [source]¶
Applies the projection logic on map entries and filters the result with the predicate, if given.
- Parameters:
projection – Projection to project the entries with.
predicate – Predicate to filter the entries with.
- Returns:
The result of the projection.
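For example, the built-in projections in the hazelcast.projection module can extract a single attribute from every entry; the age attribute below is an assumption about the stored values:
>>> from hazelcast.projection import single_attribute
>>> # Ages of all entries stored in the map.
>>> ages = employees.project(single_attribute("age")).result()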
- put(key: KeyType, value: ValueType, ttl: Optional[float] = None, max_idle: Optional[float] = None) Future[Optional[ValueType]] [source]¶
Associates the specified value with the specified key in this map.
If the map previously contained a mapping for the key, the old value is replaced by the specified value. If ttl is provided, the entry will expire and get evicted after the ttl.
Warning
This method returns a clone of the previous value, not the original (identically equal) value previously put into the map.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The specified key.
value – The value to associate with the key.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite time-to-live.
max_idle – Maximum time in seconds for this entry to stay idle in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite max idle time.
- Returns:
Previous value associated with key, or None if there was no mapping for key.
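For example, with a blocking proxy, a value can be stored with a 30-second time-to-live:
>>> previous = my_map.put("session-id", "session-data", ttl=30)
>>> # After 30 seconds the entry expires and is evicted.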
- put_all(map: Dict[KeyType, ValueType]) Future[None] [source]¶
Copies all the mappings from the specified map to this map.
No atomicity guarantees are given. In the case of a failure, some key-value tuples may get written, while others are not.
- Parameters:
map – Dictionary which includes mappings to be stored in this map.
- put_if_absent(key: KeyType, value: ValueType, ttl: Optional[float] = None, max_idle: Optional[float] = None) Future[Optional[ValueType]] [source]¶
Associates the specified key with the given value if it is not already associated.
If ttl is provided, the entry will expire and get evicted after the ttl.
This is equivalent to below, except that the action is performed atomically:
>>> if not my_map.contains_key(key):
>>>     return my_map.put(key, value)
>>> else:
>>>     return my_map.get(key)
Warning
This method returns a clone of the previous value, not the original (identically equal) value previously put into the map.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – Key of the entry.
value – Value of the entry.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite time-to-live.
max_idle – Maximum time in seconds for this entry to stay idle in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite max idle time.
- Returns:
Old value of the entry.
- put_transient(key: KeyType, value: ValueType, ttl: Optional[float] = None, max_idle: Optional[float] = None) Future[None] [source]¶
Same as put, but the MapStore defined on the server side will not be called.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – Key of the entry.
value – Value of the entry.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite time-to-live.
max_idle – Maximum time in seconds for this entry to stay idle in the map. If not provided, the value configured on the server-side configuration will be used. Setting this to 0 means infinite max idle time.
- remove(key: KeyType) Future[Optional[ValueType]] [source]¶
Removes the mapping for a key from this map if it is present.
The map will not contain a mapping for the specified key once the call returns.
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – Key of the mapping to be deleted.
- Returns:
The previous value associated with key, or None if there was no mapping for key.
- remove_all(predicate: Predicate) Future[None] [source]¶
Removes all entries which match with the supplied predicate.
- Parameters:
predicate – Used to select entries to be removed from map.
- remove_if_same(key: KeyType, value: ValueType) Future[bool] [source]¶
Removes the entry for a key only if it is currently mapped to a given value.
This is equivalent to below, except that the action is performed atomically:
>>> if my_map.contains_key(key) and my_map.get(key) == value:
>>>     my_map.remove(key)
>>>     return True
>>> else:
>>>     return False
Warning
This method uses the __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key’s class.
- Parameters:
key – The specified key.
value – The value expected to be associated with the key; the entry is removed only if it matches.
- Returns:
True if the value was removed, False otherwise.
- remove_entry_listener(registration_id: str) Future[bool] [source]¶
Removes the specified entry listener.
Returns silently if no such listener was added before.
- Parameters:
registration_id – Id of registered listener.
- Returns:
True if registration is removed, False otherwise.
- remove_interceptor(registration_id: str) Future[bool] [source]¶
Removes the given interceptor for this map, so it will not intercept operations anymore.
- Parameters:
registration_id – Registration ID of the map interceptor.
- Returns:
True if the interceptor is removed, False otherwise.
- replace(key: KeyType, value: ValueType) Future[Optional[ValueType]] [source]¶
Replaces the entry for a key only if it is currently mapped to some value.
This is equivalent to below, except that the action is performed atomically:
>>> if my_map.contains_key(key):
>>>     return my_map.put(key, value)
>>> else:
>>>     return None
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
Warning
This method returns a clone of the previous value, not the original (identically equal) value previously put into the map.
- Parameters:
key – The specified key.
value – The value to replace the previous value.
- Returns:
Previous value associated with key, or None if there was no mapping for key.
- replace_if_same(key: KeyType, old_value: ValueType, new_value: ValueType) Future[bool] [source]¶
Replaces the entry for a key only if it is currently mapped to a given value.
This is equivalent to below, except that the action is performed atomically:
>>> if my_map.contains_key(key) and my_map.get(key) == old_value:
>>>     my_map.put(key, new_value)
>>>     return True
>>> else:
>>>     return False
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The specified key.
old_value – The value expected to be currently associated with the key.
new_value – The new value to replace the old value.
- Returns:
True if the value was replaced, False otherwise.
- set(key: KeyType, value: ValueType, ttl: Optional[float] = None, max_idle: Optional[float] = None) Future[None] [source]¶
Puts an entry into this map.
Similar to the put operation, except that set doesn't return the old value, which is more efficient. If ttl is provided, the entry will expire and be evicted after ttl seconds.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – Key of the entry.
value – Value of the entry.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server side will be used. Setting this to 0 means infinite time-to-live.
max_idle – Maximum time in seconds for this entry to stay idle in the map. If not provided, the value configured on the server side will be used. Setting this to 0 means infinite max idle time.
- set_ttl(key: KeyType, ttl: float) Future[None] [source]¶
Updates the TTL (time to live) value of the entry specified by the given key with a new TTL value.
New TTL value is valid starting from the time this operation is invoked, not since the time the entry was created. If the entry does not exist or is already expired, this call has no effect.
- Parameters:
key – The key of the map entry.
ttl – Maximum time in seconds for this entry to stay in the map. Setting this to 0 means infinite time-to-live.
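For instance, set_ttl can shorten or extend the lifetime of an entry that is already in the map. A sketch; the map name, key, and duration are illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
my_map = client.get_map("my-map").blocking()

my_map.set("token", "abc")
# Expire the entry 60 seconds from now, regardless of when it was created.
my_map.set_ttl("token", 60)

client.shutdown()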
- size() Future[int] [source]¶
Returns the number of entries in this map.
- Returns:
Number of entries in this map.
- try_lock(key: KeyType, lease_time: Optional[float] = None, timeout: float = 0) Future[bool] [source]¶
Tries to acquire the lock for the specified key.
When the lock is not available:
If the timeout is not provided, the current thread doesn't wait and returns False immediately.
If the timeout is provided, the current thread becomes disabled for thread scheduling purposes and lies dormant until one of the following happens:
The lock is acquired by the current thread, or
The specified waiting time elapses.
If the lease time is provided, the lock will be released after this time elapses.
- Parameters:
key – Key to lock in this map.
lease_time – Time in seconds after which the lock will be released automatically.
timeout – Maximum time in seconds to wait for the lock.
- Returns:
True if the lock was acquired, False otherwise.
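A sketch of a guarded read-modify-write built on try_lock; the map name, key, and timeout are illustrative. Remember that map locks are re-entrant and scoped to this map only:
import hazelcast

client = hazelcast.HazelcastClient()
my_map = client.get_map("my-map").blocking()

# Wait at most 5 seconds for the lock instead of blocking forever.
if my_map.try_lock("counter", timeout=5):
    try:
        current = my_map.get("counter") or 0
        my_map.put("counter", current + 1)
    finally:
        # Always release the lock, even if the update raises.
        my_map.unlock("counter")
else:
    print("Could not acquire the lock in time")

client.shutdown()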
- try_put(key: KeyType, value: ValueType, timeout: float = 0) Future[bool] [source]¶
Tries to put the given key and value into this map and returns immediately if timeout is not provided.
If timeout is provided, operation waits until it is completed or timeout is reached.
- Parameters:
key – Key of the entry.
value – Value of the entry.
timeout – Maximum time in seconds to wait.
- Returns:
True if the put is successful, False otherwise.
- try_remove(key: KeyType, timeout: float = 0) Future[bool] [source]¶
Tries to remove the given key from this map and returns immediately if timeout is not provided.
If timeout is provided, operation waits until it is completed or timeout is reached.
- Parameters:
key – Key of the entry to be deleted.
timeout – Maximum time in seconds to wait.
- Returns:
True if the remove is successful, False otherwise.
- unlock(key: KeyType) Future[None] [source]¶
Releases the lock for the specified key.
It never blocks and returns immediately. If the current thread is the holder of this lock, then the hold count is decremented. If the hold count is zero, then the lock is released.
- Parameters:
key – The key to unlock.
- values(predicate: Optional[Predicate] = None) Future[List[ValueType]] [source]¶
Returns a list clone of the values contained in this map or values of the entries which are filtered with the predicate if provided.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Parameters:
predicate – Predicate to filter the entries.
- Returns:
A list clone of the values contained in this map.
MultiMap¶
- class MultiMap(service_name, name, context)[source]¶
Bases: Proxy[BlockingMultiMap], Generic[KeyType, ValueType]
A specialized map whose keys can be associated with multiple values.
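A short usage sketch, assuming a reachable cluster; the multimap name and values are illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
tags = client.get_multi_map("article-tags").blocking()

# Unlike Map, a single key may hold several values.
tags.put("article-1", "python")
tags.put("article-1", "hazelcast")
print(tags.get("article-1"))          # e.g. ['python', 'hazelcast']
print(tags.value_count("article-1"))  # 2

client.shutdown()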
- add_entry_listener(include_value: bool = False, key: Optional[KeyType] = None, added_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, removed_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, clear_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None) Future[str] [source]¶
Adds an entry listener for this multimap.
The listener will be notified for all multimap add/remove/clear-all events.
- Parameters:
include_value – Whether received event should include the value or not.
key – Key for filtering the events.
added_func – Function to be called when an entry is added to map.
removed_func – Function to be called when an entry is removed from map.
clear_all_func – Function to be called when entries are cleared from map.
- Returns:
A registration id which is used as a key to remove the listener.
- contains_key(key: KeyType) Future[bool] [source]¶
Determines whether this multimap contains an entry with the key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The specified key.
- Returns:
True if this multimap contains an entry for the specified key, False otherwise.
- contains_value(value: ValueType) Future[bool] [source]¶
Determines whether this map contains one or more keys for the specified value.
- Parameters:
value – The specified value.
- Returns:
True if this multimap contains an entry for the specified value, False otherwise.
- contains_entry(key: KeyType, value: ValueType) Future[bool] [source]¶
Returns whether the multimap contains an entry with the value.
- Parameters:
key – The specified key.
value – The specified value.
- Returns:
True if this multimap contains the key-value tuple, False otherwise.
- entry_set() Future[List[Tuple[KeyType, ValueType]]] [source]¶
Returns the list of key-value tuples in the multimap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of key-value tuples in the multimap.
- get(key: KeyType) Future[Optional[List[ValueType]]] [source]¶
Returns the list of values associated with the key, or None if this multimap does not contain this key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
Warning
The list is NOT backed by the multimap, so changes to the multimap are NOT reflected in the list, and vice-versa.
- Parameters:
key – The specified key.
- Returns:
The list of the values associated with the specified key.
- is_locked(key: KeyType) Future[bool] [source]¶
Checks the lock for the specified key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key that is checked for lock.
- Returns:
True if lock is acquired, False otherwise.
- force_unlock(key: KeyType) Future[None] [source]¶
Releases the lock for the specified key regardless of the lock owner.
It always successfully unlocks the key, never blocks, and returns immediately.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to unlock.
- key_set() Future[List[KeyType]] [source]¶
Returns the list of keys in the multimap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
A list of the clone of the keys.
- lock(key: KeyType, lease_time: Optional[float] = None) Future[None] [source]¶
Acquires the lock for the specified key infinitely or for the specified lease time if provided.
If the lock is not available, the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
Scope of the lock is this map only. Acquired lock is only for the key in this map.
Locks are re-entrant; so, if the key is locked N times, it should be unlocked N times before another thread can acquire it.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to lock.
lease_time – Time in seconds after which the lock will be released automatically.
- remove(key: KeyType, value: ValueType) Future[bool] [source]¶
Removes the given key-value tuple from the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key of the entry to remove.
value – The value of the entry to remove.
- Returns:
True if the size of the multimap changed after the remove operation, False otherwise.
- remove_all(key: KeyType) Future[List[ValueType]] [source]¶
Removes all the entries with the given key and returns the value list associated with this key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Parameters:
key – The key of the entries to remove.
- Returns:
The collection of removed values associated with the given key.
- put(key: KeyType, value: ValueType) Future[bool] [source]¶
Stores a key-value tuple in the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to be stored.
value – The value to be stored.
- Returns:
True if size of the multimap is increased, False if the multimap already contains the key-value tuple.
- put_all(multimap: Dict[KeyType, Sequence[ValueType]]) Future[None] [source]¶
Stores the given dictionary in the multimap.
The results of concurrently mutating the given dictionary are undefined. No atomicity guarantees are given; in case of a failure, some of the key-value pairs may get written while others may not.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
multimap – Dictionary whose entries are to be stored in this multimap.
- remove_entry_listener(registration_id: str) Future[bool] [source]¶
Removes the specified entry listener.
Returns silently if no such listener was added before.
- Parameters:
registration_id – Id of registered listener.
- Returns:
True if registration is removed, False otherwise.
- size() Future[int] [source]¶
Returns the number of entries in this multimap.
- Returns:
Number of entries in this multimap.
- value_count(key: KeyType) Future[int] [source]¶
Returns the number of values that match the given key in the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key whose values count is to be returned.
- Returns:
The number of values that match the given key in the multimap.
- values() Future[List[ValueType]] [source]¶
Returns the list of values in the multimap.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of values in the multimap.
- try_lock(key: KeyType, lease_time: Optional[float] = None, timeout: float = 0) Future[bool] [source]¶
Tries to acquire the lock for the specified key.
When the lock is not available:
If the timeout is not provided, the current thread doesn't wait and returns False immediately.
If the timeout is provided, the current thread becomes disabled for thread scheduling purposes and lies dormant until one of the following happens:
The lock is acquired by the current thread, or
The specified waiting time elapses.
If the lease time is provided, the lock will be released after this time elapses.
- Parameters:
key – Key to lock in this map.
lease_time – Time in seconds after which the lock will be released automatically.
timeout – Maximum time in seconds to wait for the lock.
- Returns:
True if the lock was acquired, False otherwise.
- unlock(key: KeyType) Future[None] [source]¶
Releases the lock for the specified key. It never blocks and returns immediately.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to unlock.
- blocking() BlockingMultiMap[KeyType, ValueType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingMultiMap(wrapped: MultiMap[KeyType, ValueType])[source]¶
Bases: MultiMap[KeyType, ValueType]
- name¶
- service_name¶
- add_entry_listener(include_value: bool = False, key: Optional[KeyType] = None, added_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, removed_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, clear_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None) str [source]¶
Adds an entry listener for this multimap.
The listener will be notified for all multimap add/remove/clear-all events.
- Parameters:
include_value – Whether received event should include the value or not.
key – Key for filtering the events.
added_func – Function to be called when an entry is added to map.
removed_func – Function to be called when an entry is removed from map.
clear_all_func – Function to be called when entries are cleared from map.
- Returns:
A registration id which is used as a key to remove the listener.
- contains_key(key: KeyType) bool [source]¶
Determines whether this multimap contains an entry with the key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The specified key.
- Returns:
True if this multimap contains an entry for the specified key, False otherwise.
- contains_value(value: ValueType) bool [source]¶
Determines whether this map contains one or more keys for the specified value.
- Parameters:
value – The specified value.
- Returns:
True if this multimap contains an entry for the specified value, False otherwise.
- contains_entry(key: KeyType, value: ValueType) bool [source]¶
Returns whether the multimap contains an entry with the value.
- Parameters:
key – The specified key.
value – The specified value.
- Returns:
True if this multimap contains the key-value tuple, False otherwise.
- entry_set() List[Tuple[KeyType, ValueType]] [source]¶
Returns the list of key-value tuples in the multimap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of key-value tuples in the multimap.
- get(key: KeyType) Optional[List[ValueType]] [source]¶
Returns the list of values associated with the key, or None if this multimap does not contain this key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
Warning
The list is NOT backed by the multimap, so changes to the multimap are NOT reflected in the list, and vice-versa.
- Parameters:
key – The specified key.
- Returns:
The list of the values associated with the specified key.
- is_locked(key: KeyType) bool [source]¶
Checks the lock for the specified key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key that is checked for lock.
- Returns:
True if lock is acquired, False otherwise.
- force_unlock(key: KeyType) None [source]¶
Releases the lock for the specified key regardless of the lock owner.
It always successfully unlocks the key, never blocks, and returns immediately.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to unlock.
- key_set() List[KeyType] [source]¶
Returns the list of keys in the multimap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
A list of the clone of the keys.
- lock(key: KeyType, lease_time: Optional[float] = None) None [source]¶
Acquires the lock for the specified key infinitely or for the specified lease time if provided.
If the lock is not available, the current thread becomes disabled for thread scheduling purposes and lies dormant until the lock has been acquired.
Scope of the lock is this map only. Acquired lock is only for the key in this map.
Locks are re-entrant; so, if the key is locked N times, it should be unlocked N times before another thread can acquire it.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to lock.
lease_time – Time in seconds after which the lock will be released automatically.
- remove(key: KeyType, value: ValueType) bool [source]¶
Removes the given key-value tuple from the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key of the entry to remove.
value – The value of the entry to remove.
- Returns:
True if the size of the multimap changed after the remove operation, False otherwise.
- remove_all(key: KeyType) List[ValueType] [source]¶
Removes all the entries with the given key and returns the value list associated with this key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Parameters:
key – The key of the entries to remove.
- Returns:
The collection of removed values associated with the given key.
- put(key: KeyType, value: ValueType) bool [source]¶
Stores a key-value tuple in the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to be stored.
value – The value to be stored.
- Returns:
True if size of the multimap is increased, False if the multimap already contains the key-value tuple.
- put_all(multimap: Dict[KeyType, Sequence[ValueType]]) None [source]¶
Stores the given dictionary in the multimap.
The results of concurrently mutating the given dictionary are undefined. No atomicity guarantees are given; in case of a failure, some of the key-value pairs may get written while others may not.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
multimap – Dictionary whose entries are to be stored in this multimap.
- remove_entry_listener(registration_id: str) bool [source]¶
Removes the specified entry listener.
Returns silently if no such listener was added before.
- Parameters:
registration_id – Id of registered listener.
- Returns:
True if registration is removed, False otherwise.
- size() int [source]¶
Returns the number of entries in this multimap.
- Returns:
Number of entries in this multimap.
- value_count(key: KeyType) int [source]¶
Returns the number of values that match the given key in the multimap.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key whose values count is to be returned.
- Returns:
The number of values that match the given key in the multimap.
- values() List[ValueType] [source]¶
Returns the list of values in the multimap.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of values in the multimap.
- try_lock(key: KeyType, lease_time: Optional[float] = None, timeout: float = 0) bool [source]¶
Tries to acquire the lock for the specified key.
When the lock is not available:
If the timeout is not provided, the current thread doesn't wait and returns False immediately.
If the timeout is provided, the current thread becomes disabled for thread scheduling purposes and lies dormant until one of the following happens:
The lock is acquired by the current thread, or
The specified waiting time elapses.
If the lease time is provided, the lock will be released after this time elapses.
- Parameters:
key – Key to lock in this map.
lease_time – Time in seconds after which the lock will be released automatically.
timeout – Maximum time in seconds to wait for the lock.
- Returns:
True if the lock was acquired, False otherwise.
- unlock(key: KeyType) None [source]¶
Releases the lock for the specified key. It never blocks and returns immediately.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The key to unlock.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True
if this proxy is destroyed successfully,False
otherwise.
- blocking() BlockingMultiMap[KeyType, ValueType] [source]¶
Returns a version of this proxy with only blocking method calls.
Queue¶
- class Queue(service_name, name, context)[source]¶
Bases: PartitionSpecificProxy[BlockingQueue], Generic[ItemType]
Concurrent, blocking, distributed, observable queue.
Queue is not a partitioned data structure. All of the queue content is stored on a single member (and its backup), so the queue will not scale by adding more members to the cluster.
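A short usage sketch with the blocking API; the queue name, items, and timeouts are illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
queue = client.get_queue("tasks").blocking()

queue.put("task-1")       # blocks if the queue is full
queue.offer("task-2", 1)  # waits at most 1 second for free space
print(queue.poll(5))      # waits up to 5 seconds for an item

client.shutdown()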
- add(item: ItemType) Future[bool] [source]¶
Adds the specified item to this queue if there is available space.
- Parameters:
item – The specified item.
- Returns:
True if element is successfully added, False otherwise.
- add_all(items: Sequence[ItemType]) Future[bool] [source]¶
Adds the elements in the specified collection to this queue.
- Parameters:
items – Collection which includes the items to be added.
- Returns:
True if this queue is changed after call, False otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) Future[str] [source]¶
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – Function to be called when an item is added to this queue.
item_removed_func – Function to be called when an item is removed from this queue.
- Returns:
A registration id which is used as a key to remove the listener.
- contains(item: ItemType) Future[bool] [source]¶
Determines whether this queue contains the specified item or not.
- Parameters:
item – The specified item to be searched.
- Returns:
True if the specified item exists in this queue, False otherwise.
- contains_all(items: Sequence[ItemType]) Future[bool] [source]¶
Determines whether this queue contains all of the items in the specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True if all of the items in the specified collection exist in this queue, False otherwise.
- drain_to(target_list: List[ItemType], max_size: int = -1) Future[int] [source]¶
Transfers all available items to the given target_list and removes these items from this queue.
If a max_size is specified, it transfers at most the given number of items. In case of a failure, an item can exist in both collections or none of them.
This operation may be more efficient than repeatedly polling elements and putting them into a collection.
- Parameters:
target_list – the list where the items in this queue will be transferred.
max_size – The maximum number of items to transfer.
- Returns:
Number of transferred items.
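For example, drain_to supports simple batch consumption. A sketch; the queue name and batch size are illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
queue = client.get_queue("tasks").blocking()

batch = []
# Move up to 100 queued items into the local list in one call.
transferred = queue.drain_to(batch, 100)
print("Drained", transferred, "items:", batch)

client.shutdown()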
- iterator() Future[List[ItemType]] [source]¶
Returns all the items in this queue.
- Returns:
Collection of items in this queue.
- is_empty() Future[bool] [source]¶
Determines whether this queue is empty or not.
- Returns:
True if this queue is empty, False otherwise.
- offer(item: ItemType, timeout: float = 0) Future[bool] [source]¶
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions.
If there is no space currently available:
If the timeout is provided, it waits until this timeout elapses and returns the result.
If the timeout is not provided, returns False immediately.
- Parameters:
item – The item to be added.
timeout – Maximum time in seconds to wait for addition.
- Returns:
True if the element was added to this queue, False otherwise.
- peek() Future[Optional[ItemType]] [source]¶
Retrieves the head of queue without removing it from the queue.
- Returns:
The head of this queue, or None if this queue is empty.
- poll(timeout: float = 0) Future[Optional[ItemType]] [source]¶
Retrieves and removes the head of this queue.
If this queue is empty:
If the timeout is provided, it waits until this timeout elapses and returns the result.
If the timeout is not provided, returns None.
- Parameters:
timeout – Maximum time in seconds to wait for an item to become available.
- Returns:
The head of this queue, or None if this queue is empty or the specified timeout elapses before an item is added to the queue.
- put(item: ItemType) Future[None] [source]¶
Adds the specified element into this queue.
If there is no space, it waits until necessary space becomes available.
- Parameters:
item – The specified item.
- remaining_capacity() Future[int] [source]¶
Returns the remaining capacity of this queue.
- Returns:
Remaining capacity of this queue.
- remove(item: ItemType) Future[bool] [source]¶
Removes the specified element from the queue if it exists.
- Parameters:
item – The specified element to be removed.
- Returns:
True if the specified element exists in this queue, False otherwise.
- remove_all(items: Sequence[ItemType]) Future[bool] [source]¶
Removes all of the elements of the specified collection from this queue.
- Parameters:
items – The specified collection.
- Returns:
True if the call changed this queue, False otherwise.
- remove_listener(registration_id: str) Future[bool] [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True if the item listener is removed, False otherwise.
- retain_all(items: Sequence[ItemType]) Future[bool] [source]¶
Removes the items which are not contained in the specified collection.
In other words, only the items that are contained in the specified collection will be retained.
- Parameters:
items – Collection which includes the elements to be retained in this queue.
- Returns:
True if this queue changed as a result of the call, False otherwise.
- size() Future[int] [source]¶
Returns the number of elements in this collection.
If the size is greater than 2**31 - 1, it returns 2**31 - 1.
- Returns:
Size of the queue.
- take() Future[ItemType] [source]¶
Retrieves and removes the head of this queue, waiting if necessary until an item becomes available.
- Returns:
The head of this queue.
- blocking() BlockingQueue[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingQueue(wrapped: Queue[ItemType])[source]¶
Bases: Queue[ItemType]
- name¶
- service_name¶
- add(item: ItemType) bool [source]¶
Adds the specified item to this queue if there is available space.
- Parameters:
item – The specified item.
- Returns:
True if element is successfully added, False otherwise.
- add_all(items: Sequence[ItemType]) bool [source]¶
Adds the elements in the specified collection to this queue.
- Parameters:
items – Collection which includes the items to be added.
- Returns:
True if this queue is changed after call, False otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) str [source]¶
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – Function to be called when an item is added to this queue.
item_removed_func – Function to be called when an item is removed from this queue.
- Returns:
A registration id which is used as a key to remove the listener.
- contains(item: ItemType) bool [source]¶
Determines whether this queue contains the specified item or not.
- Parameters:
item – The specified item to be searched.
- Returns:
True if the specified item exists in this queue, False otherwise.
- contains_all(items: Sequence[ItemType]) bool [source]¶
Determines whether this queue contains all of the items in the specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True if all of the items in the specified collection exist in this queue, False otherwise.
- drain_to(target_list: List[ItemType], max_size: int = -1) int [source]¶
Transfers all available items to the given target_list and removes these items from this queue.
If a max_size is specified, it transfers at most the given number of items. In case of a failure, an item can exist in both collections or none of them.
This operation may be more efficient than repeatedly polling elements and putting them into a collection.
- Parameters:
target_list – the list where the items in this queue will be transferred.
max_size – The maximum number of items to transfer.
- Returns:
Number of transferred items.
- iterator() List[ItemType] [source]¶
Returns all the items in this queue.
- Returns:
Collection of items in this queue.
- is_empty() bool [source]¶
Determines whether this queue is empty or not.
- Returns:
True if this queue is empty, False otherwise.
- offer(item: ItemType, timeout: float = 0) bool [source]¶
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity restrictions.
If there is no space currently available:
If the timeout is provided, it waits until this timeout elapses and returns the result.
If the timeout is not provided, returns False immediately.
- Parameters:
item – The item to be added.
timeout – Maximum time in seconds to wait for addition.
- Returns:
True if the element was added to this queue, False otherwise.
- peek() Optional[ItemType] [source]¶
Retrieves the head of queue without removing it from the queue.
- Returns:
The head of this queue, or None if this queue is empty.
- poll(timeout: float = 0) Optional[ItemType] [source]¶
Retrieves and removes the head of this queue.
If this queue is empty:
If the timeout is provided, it waits until this timeout elapses and returns the result.
If the timeout is not provided, returns None.
- Parameters:
timeout – Maximum time in seconds to wait for an item to become available.
- Returns:
The head of this queue, or None if this queue is empty or the specified timeout elapses before an item is added to the queue.
- put(item: ItemType) None [source]¶
Adds the specified element into this queue.
If there is no space, it waits until necessary space becomes available.
- Parameters:
item – The specified item.
- remaining_capacity() int [source]¶
Returns the remaining capacity of this queue.
- Returns:
Remaining capacity of this queue.
- remove(item: ItemType) bool [source]¶
Removes the specified element from the queue if it exists.
- Parameters:
item – The specified element to be removed.
- Returns:
True if the specified element exists in this queue, False otherwise.
- remove_all(items: Sequence[ItemType]) bool [source]¶
Removes all of the elements of the specified collection from this queue.
- Parameters:
items – The specified collection.
- Returns:
True if the call changed this queue, False otherwise.
- remove_listener(registration_id: str) bool [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True if the item listener is removed, False otherwise.
- retain_all(items: Sequence[ItemType]) bool [source]¶
Removes the items which are not contained in the specified collection.
In other words, only the items that are contained in the specified collection will be retained.
- Parameters:
items – Collection which includes the elements to be retained in this queue.
- Returns:
True if this queue changed as a result of the call, False otherwise.
- size() int [source]¶
Returns the number of elements in this collection.
If the size is greater than 2**31 - 1, it returns 2**31 - 1.
- Returns:
Size of the queue.
- take() ItemType [source]¶
Retrieves and removes the head of this queue, waiting if necessary until an item becomes available.
- Returns:
The head of this queue.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True if this proxy is destroyed successfully, False otherwise.
- blocking() BlockingQueue[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
PNCounter¶
- class PNCounter(service_name, name, context)[source]¶
Bases: Proxy[BlockingPNCounter]
PN (Positive-Negative) CRDT counter.
The counter supports adding and subtracting values as well as retrieving the current counter value. Each replica of this counter can perform operations locally without coordination with the other replicas, thus increasing availability. The counter guarantees that whenever two nodes have received the same set of updates, possibly in a different order, their state is identical, and any conflicting updates are merged automatically. If no new updates are made to the shared state, all nodes that can communicate will eventually have the same data.
When invoking updates from the client, the invocation is remote. This may lead to indeterminate state - the update may be applied but the response has not been received. In this case, the caller will be notified with a TargetDisconnectedError.
The read and write methods provide monotonic read and RYW (read-your-write) guarantees. These guarantees are session guarantees which means that if no replica with the previously observed state is reachable, the session guarantees are lost and the method invocation will throw a ConsistencyLostError. This does not mean that an update is lost. All of the updates are part of some replica and will be eventually reflected in the state of all other replicas. This exception just means that you cannot observe your own writes because all replicas that contain your updates are currently unreachable. After you have received a ConsistencyLostError, you can either wait for a sufficiently up-to-date replica to become reachable in which case the session can be continued or you can reset the session by calling the reset() method. If you have called the reset() method, a new session is started with the next invocation to a CRDT replica.
Notes
The CRDT state is kept entirely on non-lite (data) members. If there aren't any and the methods here are invoked on a lite member, they will fail with a NoDataMemberInClusterError.
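A short usage sketch, assuming at least one data member in the cluster; the counter name is illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
counter = client.get_pn_counter("likes").blocking()

print(counter.add_and_get(5))       # 5, assuming the counter started at 0
print(counter.get_and_decrement())  # 5; the counter becomes 4
print(counter.get())                # 4

client.shutdown()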
- get() Future[int] [source]¶
Returns the current value of the counter.
- Returns:
The current value of the counter.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_add(delta: int) Future[int] [source]¶
Adds the given value to the current value and returns the previous value.
- Parameters:
delta – The value to add.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- add_and_get(delta: int) Future[int] [source]¶
Adds the given value to the current value and returns the updated value.
- Parameters:
delta – The value to add.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_subtract(delta: int) Future[int] [source]¶
Subtracts the given value from the current value and returns the previous value.
- Parameters:
delta – The value to subtract.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- subtract_and_get(delta: int) Future[int] [source]¶
Subtracts the given value from the current value and returns the updated value.
- Parameters:
delta – The value to subtract.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_decrement() Future[int] [source]¶
Decrements the counter value by one and returns the previous value.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- decrement_and_get() Future[int] [source]¶
Decrements the counter value by one and returns the updated value.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_increment() Future[int] [source]¶
Increments the counter value by one and returns the previous value.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- increment_and_get() Future[int] [source]¶
Increments the counter value by one and returns the updated value.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- reset() None [source]¶
Resets the state observed by this PN counter.
This method may be used after a method invocation has thrown a ConsistencyLostError to reset the proxy and to be able to start a new session.
- blocking() BlockingPNCounter [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingPNCounter(wrapped: PNCounter)[source]¶
Bases: PNCounter
- name¶
- service_name¶
- get() int [source]¶
Returns the current value of the counter.
- Returns:
The current value of the counter.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_add(delta: int) int [source]¶
Adds the given value to the current value and returns the previous value.
- Parameters:
delta – The value to add.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- add_and_get(delta: int) int [source]¶
Adds the given value to the current value and returns the updated value.
- Parameters:
delta – The value to add.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_subtract(delta: int) int [source]¶
Subtracts the given value from the current value and returns the previous value.
- Parameters:
delta – The value to subtract.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- subtract_and_get(delta: int) int [source]¶
Subtracts the given value from the current value and returns the updated value.
- Parameters:
delta – The value to subtract.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_decrement() int [source]¶
Decrements the counter value by one and returns the previous value.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- decrement_and_get() int [source]¶
Decrements the counter value by one and returns the updated value.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- get_and_increment() int [source]¶
Increments the counter value by one and returns the previous value.
- Returns:
The previous value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- increment_and_get() int [source]¶
Increments the counter value by one and returns the updated value.
- Returns:
The updated value.
- Raises:
NoDataMemberInClusterError – if the cluster does not contain any data members.
ConsistencyLostError – if the session guarantees have been lost.
- reset() None [source]¶
Resets the state observed by this PN counter.
This method may be used after a method invocation has thrown a ConsistencyLostError to reset the proxy and to be able to start a new session.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True if this proxy is destroyed successfully, False otherwise.
- blocking() BlockingPNCounter [source]¶
Returns a version of this proxy with only blocking method calls.
ReliableTopic¶
- class ReliableMessageListener(*args, **kwds)[source]¶
Bases: Generic[MessageType]
A message listener for ReliableTopic.
A message listener will not be called concurrently (provided that it's not registered twice). So there is no need to synchronize access to the state it reads or writes.
If a regular function is registered on a reliable topic, the message listener works fine, but it can’t do much more than listen to messages.
This is an enhanced version of that to better integrate with the reliable topic.
Durable Subscription
The ReliableMessageListener allows you to control where you want to start processing a message when the listener is registered. This makes it possible to create a durable subscription by storing the sequence of the last message and using this as the sequence id to start from.
Error handling
The ReliableMessageListener also gives the ability to deal with errors using the is_terminal() method. If a plain function is used, then it won't terminate on errors and it will keep on running. But in some cases it is better to stop running.
Global order
The ReliableMessageListener will always get all events in order (global order). It will not get duplicates and there will only be gaps if it is too slow. For more information see is_loss_tolerant().
Delivery guarantees
Because the ReliableMessageListener controls which item it wants to continue from upon restart, it is very easy to provide an at-least-once or at-most-once delivery guarantee. store_sequence() is always called before a message is processed, so it can be persisted on some non-volatile storage. When retrieve_initial_sequence() returns the stored sequence, an at-least-once delivery is implemented, since the same item is now processed twice. To implement an at-most-once delivery guarantee, add 1 to the stored sequence when retrieve_initial_sequence() is called.
- on_message(message: TopicMessage[MessageType]) None [source]¶
Invoked when a message is received for the added reliable topic.
One should not block in this callback. If blocking is necessary, consider delegating that task to an executor or a thread pool.
- Parameters:
message – The message that is received for the topic
- retrieve_initial_sequence() int [source]¶
Retrieves the initial sequence from which this ReliableMessageListener should start.
Return -1 if there is no initial sequence and you want to start from the next published message.
If you intend to create a durable subscriber so that you continue from where you stopped the previous time, load the previous sequence and add 1. If you don't add 1, you will receive the same message twice.
- Returns:
The initial sequence.
- store_sequence(sequence: int) None [source]¶
Informs the ReliableMessageListener that it should store the sequence. This method is called before the message is processed. Can be used to make a durable subscription.
- Parameters:
sequence – The sequence.
- is_loss_tolerant() bool [source]¶
Checks if this ReliableMessageListener is able to deal with message loss. Even though the reliable topic promises to be reliable, it can be that a ReliableMessageListener is too slow. Eventually the message won’t be available anymore.
If the ReliableMessageListener is not loss tolerant and the topic detects that there are missing messages, it will terminate the ReliableMessageListener.
- Returns:
True if the ReliableMessageListener is tolerant towards losing messages.
- is_terminal(error: Exception) bool [source]¶
Checks if the ReliableMessageListener should be terminated based on an error raised while calling on_message().
- Parameters:
error – The error raised while calling on_message().
- Returns:
True if the ReliableMessageListener should terminate itself, False if it should keep on running.
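The sketch below puts the durable-subscription pieces together: a listener that remembers the last stored sequence. The import path is assumed to be this client's reliable topic proxy module, and the in-memory attribute is purely illustrative; a real durable subscriber would persist the sequence to non-volatile storage:
from hazelcast.proxy.reliable_topic import ReliableMessageListener  # assumed import path


class DurableListener(ReliableMessageListener):
    def __init__(self):
        self._last_sequence = -1  # illustrative in-memory store

    def on_message(self, message):
        # Do not block here; delegate heavy work to an executor.
        print("Received:", message.message)

    def retrieve_initial_sequence(self):
        # -1 starts from the next published message; returning a stored
        # sequence re-delivers the last seen message (at-least-once).
        return self._last_sequence

    def store_sequence(self, sequence):
        # Called before on_message(); persist this to survive restarts.
        self._last_sequence = sequence

    def is_loss_tolerant(self):
        # Keep running even if we were too slow and messages were lost.
        return True

    def is_terminal(self, error):
        # Terminate the listener on any error raised by on_message().
        return True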
- class ReliableTopic(service_name, name, context)[source]¶
Bases: Proxy[BlockingReliableTopic], Generic[MessageType]
Hazelcast provides a distribution mechanism for publishing messages that are delivered to multiple subscribers, which is also known as a publish/subscribe (pub/sub) messaging model. Publishing and subscribing are cluster-wide. When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including new members that joined after the listener was added.
Messages are ordered, meaning that listeners (subscribers) will process the messages in the order they are actually published.
Hazelcast’s Reliable Topic uses the same Topic interface as a regular topic. The main difference is that Reliable Topic is backed up by the Ringbuffer data structure, a replicated but not partitioned data structure that stores its data in a ring-like structure.
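A short publish/subscribe sketch; the topic name and payload are illustrative:
import time

import hazelcast

client = hazelcast.HazelcastClient()
topic = client.get_reliable_topic("events").blocking()

# A plain function is wrapped in a ReliableMessageListener with defaults.
topic.add_listener(lambda message: print("Got:", message.message))
topic.publish("hello")

time.sleep(1)  # give the asynchronous listener a moment to fire
client.shutdown()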
- publish(message: MessageType) Future[None] [source]¶
Publishes the message to all subscribers of this topic.
- Parameters:
message – The message.
- publish_all(messages: Sequence[MessageType]) Future[None] [source]¶
Publishes all messages to all subscribers of this topic.
- Parameters:
messages – Messages to publish.
- add_listener(listener: Union[ReliableMessageListener, Callable[[TopicMessage[MessageType]], None]]) Future[str] [source]¶
Subscribes to this reliable topic.
It can be either a simple function or an instance of a ReliableMessageListener. When a function is passed, a ReliableMessageListener is created out of that with sensible default values.
When a message is published, the ReliableMessageListener.on_message() method of the given listener (or the function passed) is called.
More than one message listener can be added on one instance.
- Parameters:
listener – Listener to add.
- Returns:
The registration id.
- remove_listener(registration_id: str) Future[bool] [source]¶
Stops receiving messages for the given message listener.
If the given listener is already removed, this method does nothing.
- Parameters:
registration_id – ID of listener registration.
- Returns:
True if registration is removed, False otherwise.
- blocking() BlockingReliableTopic[MessageType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingReliableTopic(wrapped: ReliableTopic[MessageType])[source]¶
Bases: ReliableTopic[MessageType]
- name¶
- service_name¶
- publish(message: MessageType) None [source]¶
Publishes the message to all subscribers of this topic.
- Parameters:
message – The message.
- publish_all(messages: Sequence[MessageType]) None [source]¶
Publishes all messages to all subscribers of this topic.
- Parameters:
messages – Messages to publish.
- add_listener(listener: Union[ReliableMessageListener, Callable[[TopicMessage[MessageType]], None]]) str [source]¶
Subscribes to this reliable topic.
It can be either a simple function or an instance of a ReliableMessageListener. When a function is passed, a ReliableMessageListener is created out of that with sensible default values.
When a message is published, the ReliableMessageListener.on_message() method of the given listener (or the function passed) is called.
More than one message listener can be added on one instance.
- Parameters:
listener – Listener to add.
- Returns:
The registration id.
- remove_listener(registration_id: str) bool [source]¶
Stops receiving messages for the given message listener.
If the given listener is already removed, this method does nothing.
- Parameters:
registration_id – ID of listener registration.
- Returns:
True if registration is removed, False otherwise.
- blocking() BlockingReliableTopic[MessageType] [source]¶
Returns a version of this proxy with only blocking method calls.
ReplicatedMap¶
- class ReplicatedMap(service_name, name, context)[source]¶
Bases: Proxy[BlockingReplicatedMap], Generic[KeyType, ValueType]
A ReplicatedMap is a map-like data structure with weak consistency and values locally stored on every node of the cluster.
Whenever a value is written asynchronously, the new value will be internally distributed to all existing cluster members, and eventually every node will have the new value.
When a new node joins the cluster, the new node initially will request existing values from older nodes and replicate them locally.
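A short usage sketch, assuming a reachable cluster; the map name and entries are illustrative:
import hazelcast

client = hazelcast.HazelcastClient()
settings = client.get_replicated_map("settings").blocking()

# Writes are replicated to every member; reads are served locally.
settings.put("theme", "dark")
print(settings.get("theme"))  # dark, once replication has completed

client.shutdown()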
- add_entry_listener(key: Optional[KeyType] = None, predicate: Optional[Predicate] = None, added_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, removed_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, updated_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, evicted_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, clear_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None) Future[str] [source]¶
Adds a continuous entry listener for this map.
Listener will get notified for map events filtered with given parameters.
- Parameters:
key – Key for filtering the events.
predicate – Predicate for filtering the events.
added_func – Function to be called when an entry is added to map.
removed_func – Function to be called when an entry is removed from map.
updated_func – Function to be called when an entry is updated.
evicted_func – Function to be called when an entry is evicted from map.
clear_all_func – Function to be called when entries are cleared from map.
- Returns:
A registration id which is used as a key to remove the listener.
- contains_key(key: KeyType) Future[bool] [source]¶
Determines whether this map contains an entry with the key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The specified key.
- Returns:
True if this map contains an entry for the specified key, False otherwise.
- contains_value(value: ValueType) Future[bool] [source]¶
Determines whether this map contains one or more keys for the specified value.
- Parameters:
value – The specified value.
- Returns:
True if this map contains an entry for the specified value, False otherwise.
- entry_set() Future[List[Tuple[KeyType, ValueType]]] [source]¶
Returns a List clone of the mappings contained in this map.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of key-value tuples in the map.
- get(key: KeyType) Future[Optional[ValueType]] [source]¶
Returns the value for the specified key, or None if this map does not contain this key.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – The specified key.
- Returns:
The value associated with the specified key.
- is_empty() Future[bool] [source]¶
Returns True if this map contains no key-value mappings.
- Returns:
True if this map contains no key-value mappings.
- key_set() Future[List[KeyType]] [source]¶
Returns the list of keys in the ReplicatedMap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
A list of the clone of the keys.
- put(key: KeyType, value: ValueType, ttl: float = 0) Future[Optional[ValueType]] [source]¶
Associates the specified value with the specified key in this map.
If the map previously contained a mapping for the key, the old value is replaced by the specified value. If ttl is provided, the entry will expire and be evicted after ttl seconds.
- Parameters:
key – The specified key.
value – The value to associate with the key.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server side will be used.
- Returns:
Previous value associated with key or None if there was no mapping for key.
- put_all(source: Dict[KeyType, ValueType]) Future[None] [source]¶
Copies all the mappings from the specified map to this map.
No atomicity guarantees are given. In the case of a failure, some key-value tuples may get written while others may not.
- Parameters:
source – Map which includes mappings to be stored in this map.
- remove(key: KeyType) Future[Optional[ValueType]] [source]¶
Removes the mapping for a key from this map if it is present.
The map will not contain a mapping for the specified key once the call returns.
Warning
This method uses __hash__ and __eq__ methods of the binary form of the key, not the actual implementations of __hash__ and __eq__ defined in the key's class.
- Parameters:
key – Key of the mapping to be deleted.
- Returns:
The previous value associated with key, or
None
if there was no mapping for key.
- remove_entry_listener(registration_id: str) Future[bool] [source]¶
Removes the specified entry listener.
Returns silently if no such listener was added before.
- Parameters:
registration_id – Id of registered listener.
- Returns:
True
if registration is removed,False
otherwise.
- size() Future[int] [source]¶
Returns the number of entries in this map.
- Returns:
Number of entries in this map.
- values() Future[List[ValueType]] [source]¶
Returns the list of values in the map.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of values in the map.
- blocking() BlockingReplicatedMap[KeyType, ValueType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingReplicatedMap(wrapped: ReplicatedMap[KeyType, ValueType])[source]¶
Bases:
ReplicatedMap
[KeyType
,ValueType
]- name¶
- service_name¶
- add_entry_listener(key: Optional[KeyType] = None, predicate: Optional[Predicate] = None, added_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, removed_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, updated_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, evicted_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None, clear_all_func: Optional[Callable[[EntryEvent[KeyType, ValueType]], None]] = None) str [source]¶
Adds a continuous entry listener for this map.
The listener will be notified for map events filtered by the given parameters.
- Parameters:
key – Key for filtering the events.
predicate – Predicate for filtering the events.
added_func – Function to be called when an entry is added to the map.
removed_func – Function to be called when an entry is removed from the map.
updated_func – Function to be called when an entry is updated.
evicted_func – Function to be called when an entry is evicted from the map.
clear_all_func – Function to be called when entries are cleared from the map.
- Returns:
A registration id which is used as a key to remove the listener.
- contains_key(key: KeyType) bool [source]¶
Determines whether this map contains an entry with the key.
Warning
This method uses
__hash__
and__eq__
methods of binary form of the key, not the actual implementations of__hash__
and__eq__
defined in key’s class.- Parameters:
key – The specified key.
- Returns:
True
if this map contains an entry for the specified key,False
otherwise.
- contains_value(value: ValueType) bool [source]¶
Determines whether this map contains one or more keys for the specified value.
- Parameters:
value – The specified value.
- Returns:
True
if this map contains an entry for the specified value,False
otherwise.
- entry_set() List[Tuple[KeyType, ValueType]] [source]¶
Returns a List clone of the mappings contained in this map.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of key-value tuples in the map.
- get(key: KeyType) Optional[ValueType] [source]¶
- Returns the value for the specified key, or
None
if this map does not contain this key.
Warning
This method uses
__hash__
and__eq__
methods of binary form of the key, not the actual implementations of__hash__
and__eq__
defined in key’s class.- Parameters:
key – The specified key.
- Returns:
The value associated with the specified key.
- Returns the value for the specified key, or
- is_empty() bool [source]¶
Returns
True
if this map contains no key-value mappings.- Returns:
True
if this map contains no key-value mappings.
- key_set() List[KeyType] [source]¶
Returns the list of keys in the ReplicatedMap.
Warning
The list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
A list of clones of the keys.
- put(key: KeyType, value: ValueType, ttl: float = 0) Optional[ValueType] [source]¶
Associates the specified value with the specified key in this map.
If the map previously contained a mapping for the key, the old value is replaced by the specified value. If ttl is provided, the entry will expire and be evicted after that many seconds.
- Parameters:
key – The specified key.
value – The value to associate with the key.
ttl – Maximum time in seconds for this entry to stay in the map. If not provided, the value configured on the server side will be used.
- Returns:
Previous value associated with key or
None
if there was no mapping for key.
- put_all(source: Dict[KeyType, ValueType]) None [source]¶
Copies all the mappings from the specified map to this map.
No atomicity guarantees are given. In the case of a failure, some key-value tuples may get written, while others are not.
- Parameters:
source – Map which includes mappings to be stored in this map.
- remove(key: KeyType) Optional[ValueType] [source]¶
Removes the mapping for a key from this map if it is present.
The map will not contain a mapping for the specified key once the call returns.
Warning
This method uses
__hash__
and__eq__
methods of binary form of the key, not the actual implementations of__hash__
and__eq__
defined in key’s class.- Parameters:
key – Key of the mapping to be deleted.
- Returns:
The previous value associated with key, or
None
if there was no mapping for key.
- remove_entry_listener(registration_id: str) bool [source]¶
Removes the specified entry listener.
Returns silently if no such listener was added before.
- Parameters:
registration_id – Id of registered listener.
- Returns:
True
if registration is removed,False
otherwise.
- size() int [source]¶
Returns the number of entries in this map.
- Returns:
Number of entries in this map.
- values() List[ValueType] [source]¶
Returns the list of values in the map.
Warning
The returned list is NOT backed by the map, so changes to the map are NOT reflected in the list, and vice-versa.
- Returns:
The list of values in the map.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True
if this proxy is destroyed successfully,False
otherwise.
- blocking() BlockingReplicatedMap[KeyType, ValueType] [source]¶
Returns a version of this proxy with only blocking method calls.
Ringbuffer¶
- OVERFLOW_POLICY_OVERWRITE = 0¶
Configuration property for the DEFAULT overflow policy. When an item is added to a full Ringbuffer, the oldest item in the Ringbuffer is overwritten and the new item is added.
- OVERFLOW_POLICY_FAIL = 1¶
Configuration property for the FAIL overflow policy. When an item is added to a full Ringbuffer, the call fails and the item is not added.
The reason FAIL exists is to give the caller the opportunity to obey the TTL. If blocking behavior is required, it can be implemented by retrying in combination with exponential backoff:
>>> sleep_ms = 100
>>> while True:
...     result = ringbuffer.add(item, OVERFLOW_POLICY_FAIL).result()
...     if result != -1:
...         break
...     time.sleep(sleep_ms / 1000)
...     sleep_ms *= 2
- MAX_BATCH_SIZE = 1000¶
The maximum number of items that can be added to or read from the Ringbuffer at a time.
- class ReadResult(read_count, next_seq, item_seqs, items)[source]¶
Bases:
Sequence
Defines the result of a
Ringbuffer.read_many()
operation.- SEQUENCE_UNAVAILABLE = -1¶
Value returned from methods returning a sequence number when the information is not available (e.g. because of rolling upgrade and some members not returning the sequence).
- property read_count: int¶
The number of items that have been read before filtering.
If no filter is set, then the
read_count
will be equal tosize
.But if a filter is applied, it could be that items are read, but are filtered out. So, if you are trying to make another read based on this, then you should increment the sequence by
read_count
and not bysize
.Otherwise, you will be re-reading the same filtered messages.
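As a hedged sketch of that bookkeeping (the ringbuffer, sequence, and some_filter names are illustrative, and the filter must be an object the server can deserialize):
# Read up to 10 items starting at `sequence`, applying a server-side filter.
result = ringbuffer.read_many(sequence, 1, 10, some_filter).result()
# Advance by read_count, not by the number of returned (filtered) items.
sequence += result.read_count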
- property size: int¶
The result set size.
- property next_sequence_to_read_from: int¶
The sequence of the item following the last read item.
This sequence can then be used to read items following the ones returned by this result set.
Usually this sequence is equal to the sequence used to retrieve this result set incremented by the
read_count
. In cases when the reader tolerates lost items, this is not the case.For instance, if the reader requests an item with a stale sequence (one which has already been overwritten), the read will jump to the oldest sequence and read from there.
Similarly, if the reader requests an item in the future (e.g. because the partition was lost and the reader was unaware of this), the read method will jump back to the newest available sequence.
Because of these jumps and only in the case when the reader is loss tolerant, the next sequence must be retrieved using this method. A return value of
SEQUENCE_UNAVAILABLE
means that the information is not available.
- class Ringbuffer(service_name, name, context)[source]¶
Bases:
PartitionSpecificProxy
[BlockingRingbuffer
],Generic
[ItemType
]A Ringbuffer is an append-only data structure where the content is stored in a ring-like structure.
A Ringbuffer has a capacity, so it won't grow beyond that capacity and endanger the stability of the system. If that capacity is exceeded, then the oldest item in the Ringbuffer is overwritten. The Ringbuffer has two always-incrementing sequences:
tail_sequence()
: This is the side where the youngest item is found. So the tail is the side of the ringbuffer where items are added to.head_sequence()
: This is the side where the oldest items are found. So the head is the side where items get discarded.
The items in the ringbuffer can be found by a sequence that is in between (inclusive) the head and tail sequence.
If data is read from a ringbuffer with a sequence that is smaller than the head sequence, it means that the data is not available anymore and a
hazelcast.errors.StaleSequenceError
is raised.A Ringbuffer is currently a replicated, but not partitioned, data structure. So all data is stored in a single partition, similar to the
hazelcast.proxy.queue.Queue
implementation.A Ringbuffer can be used in a way similar to the Queue, but one of the key differences is that a
hazelcast.proxy.queue.Queue.take()
is destructive, meaning that only 1 thread is able to take an item. Aread_one()
is not destructive, so you can have multiple threads reading the same item multiple times.- capacity() Future[int] [source]¶
Returns the capacity of this Ringbuffer.
- Returns:
The capacity of Ringbuffer.
- size() Future[int] [source]¶
Returns number of items in the Ringbuffer.
- Returns:
The size of Ringbuffer.
- tail_sequence() Future[int] [source]¶
Returns the sequence of the tail.
The tail is the side of the Ringbuffer where the items are added to. The initial value of the tail is
-1
.- Returns:
The sequence of the tail.
- head_sequence() Future[int] [source]¶
Returns the sequence of the head.
The head is the side of the Ringbuffer where the oldest items in the Ringbuffer are found. If the Ringbuffer is empty, the head will be one more than the tail. The initial value of the head is
0
(1
more than tail).- Returns:
The sequence of the head.
- remaining_capacity() Future[int] [source]¶
Returns the remaining capacity of the Ringbuffer.
- Returns:
The remaining capacity of Ringbuffer.
- add(item, overflow_policy: int = 0) Future[int] [source]¶
Adds the specified item to the tail of the Ringbuffer.
If there is no space in the Ringbuffer, the action is determined by
overflow_policy
.- Parameters:
item – The specified item to be added.
overflow_policy – the OverflowPolicy to be used when there is no space.
- Returns:
The sequenceId of the added item, or
-1
if the add failed.
- add_all(items: Sequence[ItemType], overflow_policy: int = 0) Future[int] [source]¶
Adds all of the items in the specified collection to the tail of the Ringbuffer.
This is likely to outperform multiple calls to
add()
due to better I/O utilization and a reduced number of executed operations. The items are added in the order of the iterator of the collection.If there is no space in the Ringbuffer, the action is determined by
overflow_policy
.- Parameters:
items – The specified collection which contains the items to be added.
overflow_policy – The OverflowPolicy to be used when there is no space.
- Returns:
The sequenceId of the last written item, or
-1
if the last write failed.
- read_one(sequence: int) Future[ItemType] [source]¶
Reads one item from the Ringbuffer.
If the sequence is one beyond the current tail, this call blocks until an item is added. Currently it isn’t possible to control how long this call is going to block.
- Parameters:
sequence – The sequence of the item to read.
- Returns:
The read item.
- read_many(start_sequence: int, min_count: int, max_count: int, filter: Optional[Any] = None) Future[ReadResult] [source]¶
Reads a batch of items from the Ringbuffer.
If the number of available items after the first read item is smaller than the
max_count
, these items are returned. So it could be that the number of items read is smaller than themax_count
. If there are fewer items available thanmin_count
, then this call blocks.Warning
These blocking calls consume server memory and, if there are many calls, it is possible to see memory leaks or
OutOfMemoryError
s on the server.Reading a batch of items is likely to perform better because less overhead is involved.
A filter can be provided to only select items that need to be read. If the filter is
None
, all items are read. If the filter is notNone
, only items where the filter function returns true are returned. Using filters is a good way to prevent getting items that are of no value to the receiver. This reduces the amount of I/O and the number of operations being executed, and can result in a significant performance improvement. Note that the filtering logic must be defined on the server side.If the
start_sequence
is smaller than the smallest sequence still available in the Ringbuffer (head_sequence()
), then the smallest available sequence will be used as the start sequence and the minimum/maximum number of items will be attempted to be read from there on.If the
start_sequence
is bigger than the last available sequence in the Ringbuffer (tail_sequence()
), then the last available sequence plus one will be used as the start sequence and the call will block until further items become available and it can read at least the minimum number of items.- Parameters:
start_sequence – The start sequence of the first item to read.
min_count – The minimum number of items to read.
max_count – The maximum number of items to read.
filter – Filter to select returned elements.
- Returns:
The list of read items.
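Putting this together, a loss-tolerant reader loop might look like the following sketch (the handle function is a hypothetical callback); it relies on next_sequence_to_read_from from the returned ReadResult instead of doing its own sequence arithmetic:
sequence = ringbuffer.head_sequence().result()
while True:
    # Block until at least one item is available; read up to 100 at once.
    result = ringbuffer.read_many(sequence, 1, 100).result()
    for item in result:
        handle(item)  # hypothetical handler
    # Continue from wherever the server ended up after possible jumps
    # over stale or not-yet-available sequences.
    sequence = result.next_sequence_to_read_from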
- blocking() BlockingRingbuffer[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingRingbuffer(wrapped: Ringbuffer[ItemType])[source]¶
Bases:
Ringbuffer
[ItemType
]- name¶
- service_name¶
- capacity() int [source]¶
Returns the capacity of this Ringbuffer.
- Returns:
The capacity of Ringbuffer.
- tail_sequence() int [source]¶
Returns the sequence of the tail.
The tail is the side of the Ringbuffer where the items are added to. The initial value of the tail is
-1
.- Returns:
The sequence of the tail.
- head_sequence() int [source]¶
Returns the sequence of the head.
The head is the side of the Ringbuffer where the oldest items in the Ringbuffer are found. If the Ringbuffer is empty, the head will be one more than the tail. The initial value of the head is
0
(1
more than tail).- Returns:
The sequence of the head.
- remaining_capacity() int [source]¶
Returns the remaining capacity of the Ringbuffer.
- Returns:
The remaining capacity of Ringbuffer.
- add(item, overflow_policy: int = 0) int [source]¶
Adds the specified item to the tail of the Ringbuffer.
If there is no space in the Ringbuffer, the action is determined by
overflow_policy
.- Parameters:
item – The specified item to be added.
overflow_policy – the OverflowPolicy to be used when there is no space.
- Returns:
The sequenceId of the added item, or
-1
if the add failed.
- add_all(items: Sequence[ItemType], overflow_policy: int = 0) int [source]¶
Adds all of the items in the specified collection to the tail of the Ringbuffer.
This is likely to outperform multiple calls to
add()
due to better I/O utilization and a reduced number of executed operations. The items are added in the order of the iterator of the collection.If there is no space in the Ringbuffer, the action is determined by
overflow_policy
.- Parameters:
items – The specified collection which contains the items to be added.
overflow_policy – The OverflowPolicy to be used when there is no space.
- Returns:
The sequenceId of the last written item, or
-1
if the last write failed.
- read_one(sequence: int) ItemType [source]¶
Reads one item from the Ringbuffer.
If the sequence is one beyond the current tail, this call blocks until an item is added. Currently it isn’t possible to control how long this call is going to block.
- Parameters:
sequence – The sequence of the item to read.
- Returns:
The read item.
- read_many(start_sequence: int, min_count: int, max_count: int, filter: Optional[Any] = None) ReadResult [source]¶
Reads a batch of items from the Ringbuffer.
If the number of available items after the first read item is smaller than the
max_count
, these items are returned. So it could be that the number of items read is smaller than themax_count
. If there are fewer items available thanmin_count
, then this call blocks.Warning
These blocking calls consume server memory and, if there are many calls, it is possible to see memory leaks or
OutOfMemoryError
s on the server.Reading a batch of items is likely to perform better because less overhead is involved.
A filter can be provided to only select items that need to be read. If the filter is
None
, all items are read. If the filter is notNone
, only items where the filter function returns true are returned. Using filters is a good way to prevent getting items that are of no value to the receiver. This reduces the amount of I/O and the number of operations being executed, and can result in a significant performance improvement. Note that the filtering logic must be defined on the server side.If the
start_sequence
is smaller than the smallest sequence still available in the Ringbuffer (head_sequence()
), then the smallest available sequence will be used as the start sequence and the minimum/maximum number of items will be attempted to be read from there on.If the
start_sequence
is bigger than the last available sequence in the Ringbuffer (tail_sequence()
), then the last available sequence plus one will be used as the start sequence and the call will block until further items become available and it can read at least the minimum number of items.- Parameters:
start_sequence – The start sequence of the first item to read.
min_count – The minimum number of items to read.
max_count – The maximum number of items to read.
filter – Filter to select returned elements.
- Returns:
The list of read items.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True
if this proxy is destroyed successfully,False
otherwise.
- blocking() BlockingRingbuffer[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
Set¶
- class Set(service_name, name, context)[source]¶
Bases:
PartitionSpecificProxy
,Generic
[ItemType
]Concurrent, distributed implementation of Set
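A minimal usage sketch, assuming a connected client (the set name is illustrative):
distributed_set = client.get_set("distributed-set").blocking()
distributed_set.add("item")
print(distributed_set.contains("item"))  # True
print(distributed_set.size())  # 1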
- add(item: ItemType) Future[bool] [source]¶
Adds the specified item if it does not exist in this set.
- Parameters:
item – The specified item to be added.
- Returns:
True
if this set is changed after call,False
otherwise.
- add_all(items: Sequence[ItemType]) Future[bool] [source]¶
Adds the elements in the specified collection if they do not exist in this set.
- Parameters:
items – Collection which includes the items to be added.
- Returns:
True
if this set is changed after call,False
otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) Future[str] [source]¶
Adds an item listener for this container.
The listener will be notified for all container add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – Function to be called when an item is added to this set.
item_removed_func – Function to be called when an item is deleted from this set.
- Returns:
A registration id which is used as a key to remove the listener.
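For example, a sketch that subscribes to add events on the blocking Set proxy from the sketch above:
def on_item_added(event):
    # ItemEvent carries the item itself when include_value=True.
    print("Added:", event.item)

registration_id = distributed_set.add_listener(
    include_value=True,
    item_added_func=on_item_added,
)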
- contains(item: ItemType) Future[bool] [source]¶
Determines whether this set contains the specified item or not.
- Parameters:
item – The specified item to be searched.
- Returns:
True
if the specified item exists in this set,False
otherwise.
- contains_all(items: Sequence[ItemType]) Future[bool] [source]¶
Determines whether this set contains all items in the specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True
if all the items in the specified collection exist in this set,False
otherwise.
- get_all() Future[List[ItemType]] [source]¶
Returns all the items in the set.
- Returns:
List of the items in this set.
- is_empty() Future[bool] [source]¶
Determines whether this set is empty or not.
- Returns:
True
if this set is empty,False
otherwise.
- remove(item: ItemType) Future[bool] [source]¶
Removes the specified element from the set if it exists.
- Parameters:
item – The specified element to be removed.
- Returns:
True
if the specified element exists in this set,False
otherwise.
- remove_all(items: Sequence[ItemType]) Future[bool] [source]¶
Removes all of the elements of the specified collection from this set.
- Parameters:
items – The specified collection.
- Returns:
True
if the call changed this set,False
otherwise.
- remove_listener(registration_id: str) Future[bool] [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True
if the item listener is removed,False
otherwise.
- retain_all(items: Sequence[ItemType]) Future[bool] [source]¶
Removes the items which are not contained in the specified collection.
In other words, only the items that are contained in the specified collection will be retained.
- Parameters:
items – Collection which includes the elements to be retained in this set.
- Returns:
True
if this set changed as a result of the call,False
otherwise.
- size() Future[int] [source]¶
Returns the number of items in this set.
- Returns:
Number of items in this set.
- blocking() BlockingSet[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingSet(wrapped: Set[ItemType])[source]¶
Bases:
Set
[ItemType
]- name¶
- service_name¶
- add(item: ItemType) bool [source]¶
Adds the specified item if it does not exist in this set.
- Parameters:
item – The specified item to be added.
- Returns:
True
if this set is changed after call,False
otherwise.
- add_all(items: Sequence[ItemType]) bool [source]¶
Adds the elements in the specified collection if they do not exist in this set.
- Parameters:
items – Collection which includes the items to be added.
- Returns:
True
if this set is changed after call,False
otherwise.
- add_listener(include_value: bool = False, item_added_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None, item_removed_func: Optional[Callable[[ItemEvent[ItemType]], None]] = None) str [source]¶
Adds an item listener for this container.
The listener will be notified for all container add/remove events.
- Parameters:
include_value – Whether received events include the updated item or not.
item_added_func – Function to be called when an item is added to this set.
item_removed_func – Function to be called when an item is deleted from this set.
- Returns:
A registration id which is used as a key to remove the listener.
- contains(item: ItemType) bool [source]¶
Determines whether this set contains the specified item or not.
- Parameters:
item – The specified item to be searched.
- Returns:
True
if the specified item exists in this set,False
otherwise.
- contains_all(items: Sequence[ItemType]) bool [source]¶
Determines whether this set contains all items in the specified collection or not.
- Parameters:
items – The specified collection which includes the items to be searched.
- Returns:
True
if all the items in the specified collection exist in this set,False
otherwise.
- get_all() List[ItemType] [source]¶
Returns all the items in the set.
- Returns:
List of the items in this set.
- is_empty() bool [source]¶
Determines whether this set is empty or not.
- Returns:
True
if this set is empty,False
otherwise.
- remove(item: ItemType) bool [source]¶
Removes the specified element from the set if it exists.
- Parameters:
item – The specified element to be removed.
- Returns:
True
if the specified element exists in this set,False
otherwise.
- remove_all(items: Sequence[ItemType]) bool [source]¶
Removes all of the elements of the specified collection from this set.
- Parameters:
items – The specified collection.
- Returns:
True
if the call changed this set,False
otherwise.
- remove_listener(registration_id: str) bool [source]¶
Removes the specified item listener.
Returns silently if the specified listener was not added before.
- Parameters:
registration_id – Id of the listener to be deleted.
- Returns:
True
if the item listener is removed,False
otherwise.
- retain_all(items: Sequence[ItemType]) bool [source]¶
Removes the items which are not contained in the specified collection.
In other words, only the items that are contained in the specified collection will be retained.
- Parameters:
items – Collection which includes the elements to be retained in this set.
- Returns:
True
if this set changed as a result of the call,False
otherwise.
- blocking() BlockingSet[ItemType] [source]¶
Returns a version of this proxy with only blocking method calls.
Topic¶
- class Topic(service_name, name, context)[source]¶
Bases:
PartitionSpecificProxy
[BlockingTopic
],Generic
[MessageType
Hazelcast provides a distribution mechanism for publishing messages that are delivered to multiple subscribers, also known as the publish/subscribe (pub/sub) messaging model.
Publishing and subscribing are cluster-wide. When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including new members that joined after the listener was added.
Messages are ordered, meaning that listeners (subscribers) will process the messages in the order they are actually published.
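A minimal pub/sub sketch, assuming a connected client (the topic name is illustrative):
topic = client.get_topic("notifications").blocking()

def on_message(message):
    # TopicMessage exposes the published payload via its `message` attribute.
    print("Received:", message.message)

topic.add_listener(on_message)
topic.publish("hello")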
- add_listener(on_message: Optional[Callable[[TopicMessage[MessageType]], None]] = None) Future[str] [source]¶
Subscribes to this topic.
When someone publishes a message on this topic,
on_message
function is called if provided.- Parameters:
on_message – Function to be called when a message is published.
- Returns:
A registration id which is used as a key to remove the listener.
- publish(message: MessageType) Future[None] [source]¶
Publishes the message to all subscribers of this topic.
- Parameters:
message – The message to be published.
- publish_all(messages: Sequence[MessageType]) Future[None] [source]¶
Publishes the messages to all subscribers of this topic.
- Parameters:
messages – The messages to be published.
- remove_listener(registration_id: str) Future[bool] [source]¶
Stops receiving messages for the given message listener.
If the given listener was already removed, this method does nothing.
- Parameters:
registration_id – Registration id of the listener to be removed.
- Returns:
True
if the listener is removed,False
otherwise.
- blocking() BlockingTopic[MessageType] [source]¶
Returns a version of this proxy with only blocking method calls.
- class BlockingTopic(wrapped: Topic[MessageType])[source]¶
Bases:
Topic
[MessageType
]- name¶
- service_name¶
- add_listener(on_message: Optional[Callable[[TopicMessage[MessageType]], None]] = None) str [source]¶
Subscribes to this topic.
When someone publishes a message on this topic,
on_message
function is called if provided.- Parameters:
on_message – Function to be called when a message is published.
- Returns:
A registration id which is used as a key to remove the listener.
- publish(message: MessageType) None [source]¶
Publishes the message to all subscribers of this topic.
- Parameters:
message – The message to be published.
- publish_all(messages: Sequence[MessageType]) None [source]¶
Publishes the messages to all subscribers of this topic.
- Parameters:
messages – The messages to be published.
- remove_listener(registration_id: str) bool [source]¶
Stops receiving messages for the given message listener.
If the given listener was already removed, this method does nothing.
- Parameters:
registration_id – Registration id of the listener to be removed.
- Returns:
True
if the listener is removed,False
otherwise.
- destroy() bool [source]¶
Destroys this proxy.
- Returns:
True
if this proxy is destroyed successfully,False
otherwise.
- blocking() BlockingTopic[MessageType] [source]¶
Returns a version of this proxy with only blocking method calls.
TransactionalList¶
- class TransactionalList(name, transaction, context)[source]¶
Bases:
TransactionalProxy
,Generic
[ItemType
]Transactional implementation of
List
.- add(item: ItemType) bool [source]¶
Transactional implementation of
List.add(item)
- Parameters:
item – The new item to be added.
- Returns:
True
if the item is added successfully,False
otherwise.
- remove(item: ItemType) bool [source]¶
Transactional implementation of
List.remove(item)
- Parameters:
item – The specified item to be removed.
- Returns:
True
if the item is removed successfully,False
otherwise.
- size() int [source]¶
Transactional implementation of
List.size()
- Returns:
The size of the list.
TransactionalMap¶
- class TransactionalMap(name, transaction, context)[source]¶
Bases:
TransactionalProxy
,Generic
[KeyType
,ValueType
]Transactional implementation of
Map
.- contains_key(key: KeyType) bool [source]¶
Transactional implementation of
Map.contains_key(key)
- Parameters:
key – The specified key.
- Returns:
True
if this map contains an entry for the specified key,False
otherwise.
- get(key: KeyType) Optional[ValueType] [source]¶
Transactional implementation of
Map.get(key)
- Parameters:
key – The specified key.
- Returns:
The value for the specified key.
- get_for_update(key: KeyType) Optional[ValueType] [source]¶
Locks the key and then gets and returns the value to which the specified key is mapped.
Lock will be released at the end of the transaction (either commit or rollback).
- Parameters:
key – The specified key.
- Returns:
The value for the specified key.
- size() int [source]¶
Transactional implementation of
Map.size()
- Returns:
Number of entries in this map.
- is_empty() bool [source]¶
Transactional implementation of
Map.is_empty()
- Returns:
True
if this map contains no key-value mappings,False
otherwise.
- put(key: KeyType, value: ValueType, ttl: Optional[float] = None) Optional[ValueType] [source]¶
Transactional implementation of
Map.put(key, value, ttl)
The object to be put will be accessible only in the current transaction context till the transaction is committed.
- Parameters:
key – The specified key.
value – The value to associate with the key.
ttl – Maximum time in seconds for this entry to stay.
- Returns:
Previous value associated with key or
None
if there was no mapping for key.
- put_if_absent(key: KeyType, value: ValueType) Optional[ValueType] [source]¶
Transactional implementation of
Map.put_if_absent(key, value)
The object to be put will be accessible only in the current transaction context till the transaction is committed.
- Parameters:
key – Key of the entry.
value – Value of the entry.
- Returns:
Old value of the entry.
- set(key: KeyType, value: ValueType) None [source]¶
Transactional implementation of
Map.set(key, value)
The object to be set will be accessible only in the current transaction context till the transaction is committed.
- Parameters:
key – Key of the entry.
value – Value of the entry.
- replace(key: KeyType, value: ValueType) Optional[ValueType] [source]¶
Transactional implementation of
Map.replace(key, value)
The object to be replaced will be accessible only in the current transaction context till the transaction is committed.
- Parameters:
key – The specified key.
value – The value to replace the previous value.
- Returns:
Previous value associated with key, or
None
if there was no mapping for key.
- replace_if_same(key: KeyType, old_value: ValueType, new_value: ValueType) bool [source]¶
Transactional implementation of
Map.replace_if_same(key, old_value, new_value)
The object to be replaced will be accessible only in the current transaction context till the transaction is committed.
- Parameters:
key – The specified key.
old_value – Replace the key value if it is the old value.
new_value – The new value to replace the old value.
- Returns:
True
if the value was replaced,False
otherwise.
- remove(key: KeyType) Optional[ValueType] [source]¶
Transactional implementation of
Map.remove(key)
The object to be removed will be removed from only the current transaction context until the transaction is committed.
- Parameters:
key – Key of the mapping to be deleted.
- Returns:
The previous value associated with key, or
None
if there was no mapping for key.
- remove_if_same(key: KeyType, value: ValueType) bool [source]¶
Transactional implementation of
Map.remove_if_same(key, value)
The object to be removed will be removed from only the current transaction context until the transaction is committed.
- Parameters:
key – The specified key.
value – Remove the key if it has this value.
- Returns:
True
if the value was removed,False
otherwise.
- delete(key: KeyType) None [source]¶
Transactional implementation of
Map.delete(key)
The object to be deleted will be removed from only the current transaction context until the transaction is committed.
- Parameters:
key – Key of the mapping to be deleted.
- key_set(predicate: Optional[Predicate] = None) List[KeyType] [source]¶
Transactional implementation of
Map.key_set(predicate)
- Parameters:
predicate – Predicate to filter the entries.
- Returns:
A list of clones of the keys.
- values(predicate: Optional[Predicate] = None) List[ValueType] [source]¶
Transactional implementation of
Map.values(predicate)
- Parameters:
predicate – Predicate to filter the entries.
- Returns:
A list of clones of the values contained in this map.
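Transactional proxies such as this one are obtained from a transaction started on the client, not from the client itself. A hedged sketch (the map name is illustrative):
transaction = client.new_transaction(timeout=10)
transaction.begin()
try:
    tx_map = transaction.get_map("transactional-map")
    tx_map.put("key", "value")
    transaction.commit()
except Exception:
    transaction.rollback()
    raise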
TransactionalMultiMap¶
- class TransactionalMultiMap(name, transaction, context)[source]¶
Bases:
TransactionalProxy
,Generic
[KeyType
,ValueType
]Transactional implementation of
MultiMap
.- put(key: KeyType, value: ValueType) bool [source]¶
Transactional implementation of
MultiMap.put(key, value)
- Parameters:
key – The key to be stored.
value – The value to be stored.
- Returns:
True
if the size of the multimap is increased,False
if the multimap already contains the key-value tuple.
- get(key: KeyType) Optional[List[ValueType]] [source]¶
Transactional implementation of
MultiMap.get(key)
- Parameters:
key – The key whose associated values are returned.
- Returns:
The collection of the values associated with the key.
- remove(key: KeyType, value: ValueType) bool [source]¶
Transactional implementation of
MultiMap.remove(key, value)
- Parameters:
key – The key of the entry to remove.
value – The value of the entry to remove.
- Returns:
True
if the item is removed,False
otherwise.
- remove_all(key: KeyType) List[ValueType] [source]¶
Transactional implementation of
MultiMap.remove_all(key)
- Parameters:
key – The key of the entries to remove.
- Returns:
The collection of the values associated with the key.
- value_count(key: KeyType) int [source]¶
Transactional implementation of
MultiMap.value_count(key)
- Parameters:
key – The key whose number of values is to be returned.
- Returns:
The number of values matching the given key in the multimap.
- size() int [source]¶
Transactional implementation of
MultiMap.size()
- Returns:
The number of key-value tuples in the multimap.
TransactionalQueue¶
- class TransactionalQueue(name, transaction, context)[source]¶
Bases:
TransactionalProxy
,Generic
[ItemType
]Transactional implementation of
Queue
.- offer(item: ItemType, timeout: float = 0) bool [source]¶
Transactional implementation of
Queue.offer(item, timeout)
- Parameters:
item – The item to be added.
timeout – Maximum time in seconds to wait for addition.
- Returns:
True
if the element was added to this queue,False
otherwise.
- take() ItemType [source]¶
Transactional implementation of
Queue.take()
- Returns:
The head of this queue.
- poll(timeout: float = 0) Optional[ItemType] [source]¶
Transactional implementation of
Queue.poll(timeout)
- Parameters:
timeout – Maximum time in seconds to wait for addition.
- Returns:
The head of this queue, or
None
if this queue is empty or specified timeout elapses before an item is added to the queue.
- peek(timeout: float = 0) Optional[ItemType] [source]¶
Transactional implementation of
Queue.peek(timeout)
- Parameters:
timeout – Maximum time in seconds to wait for addition.
- Returns:
The head of this queue, or
None
if this queue is empty or specified timeout elapses before an item is added to the queue.
- size() int [source]¶
Transactional implementation of
Queue.size()
- Returns:
Size of the queue.
TransactionalSet¶
- class TransactionalSet(name, transaction, context)[source]¶
Bases:
TransactionalProxy
,Generic
[ItemType
]Transactional implementation of
Set
.- add(item: ItemType) bool [source]¶
Transactional implementation of
Set.add(item)
- Parameters:
item – The new item to be added.
- Returns:
True
if the item is added successfully,False
otherwise.
- remove(item: ItemType) bool [source]¶
Transactional implementation of
Set.remove(item)
- Parameters:
item – The specified item to be deleted.
- Returns:
True
if the item is removed successfully,False
otherwise.
- size() int [source]¶
Transactional implementation of
Set.size()
- Returns:
Size of the set.
Security¶
- class BasicTokenProvider(token: Union[str, bytes] = '')[source]¶
Bases:
TokenProvider
BasicTokenProvider sends the given token to the authentication endpoint.
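A configuration sketch (the token value is illustrative; the cluster must be configured for token-based authentication):
from hazelcast import HazelcastClient
from hazelcast.security import BasicTokenProvider

client = HazelcastClient(
    token_provider=BasicTokenProvider("my-secret-token"),
)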
Serialization¶
User API for Serialization.
- class ObjectDataOutput[source]¶
Bases:
object
ObjectDataOutput provides an interface to convert primitive types, or arrays of them, to a series of bytes and write them to a stream.
- write_from(buff: bytearray, offset: Optional[int] = None, length: Optional[int] = None) None [source]¶
Writes the content of the buffer to this output stream.
- Parameters:
buff – Input buffer.
offset – Offset of the buffer where the copy begins.
length – Length of data to be copied from the offset into the stream.
- write_boolean(val: bool) None [source]¶
Writes a bool value to this output stream.
A single byte value of 1 represents True; 0 represents False.
- Parameters:
val – The bool to be written.
- write_byte(val: int) None [source]¶
Writes a byte value to this output stream.
- Parameters:
val – The byte value to be written.
- write_short(val: int) None [source]¶
Writes a short value to this output stream.
- Parameters:
val – The short value to be written.
- write_char(val: str) None [source]¶
Writes a char value to this output stream.
- Parameters:
val – The char value to be written.
- write_int(val: int) None [source]¶
Writes an int value to this output stream.
- Parameters:
val – The int value to be written.
- write_long(val: int) None [source]¶
Writes a long value to this output stream.
- Parameters:
val – The long value to be written.
- write_float(val: float) None [source]¶
Writes a float value to this output stream.
- Parameters:
val – The float value to be written.
- write_double(val: float) None [source]¶
Writes a double value to this output stream.
- Parameters:
val – The double value to be written.
- write_bytes(val: str) None [source]¶
Writes a string to this output stream.
- Parameters:
val – The string to be written.
- write_chars(val: str) None [source]¶
Writes every character of the string to this output stream.
- Parameters:
val – The string to be written.
- write_string(val: str) None [source]¶
Writes a UTF-8 string to this output stream.
- Parameters:
val – The UTF-8 string to be written.
- write_utf(val: str) None [source]¶
Writes a UTF-8 string to this output stream.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
write_string()
instead.- Parameters:
val – The UTF-8 string to be written.
- write_byte_array(val: bytearray) None [source]¶
Writes a byte array to this output stream.
- Parameters:
val – The byte array to be written.
- write_boolean_array(val: Sequence[bool]) None [source]¶
Writes a bool array to this output stream.
- Parameters:
val – The bool array to be written.
- write_char_array(val: Sequence[str]) None [source]¶
Writes a char array to this output stream.
- Parameters:
val – The char array to be written.
- write_int_array(val: Sequence[int]) None [source]¶
Writes an int array to this output stream.
- Parameters:
val – The int array to be written.
- write_long_array(val: Sequence[int]) None [source]¶
Writes a long array to this output stream.
- Parameters:
val – The long array to be written.
- write_double_array(val: Sequence[float]) None [source]¶
Writes a double array to this output stream.
- Parameters:
val – The double array to be written.
- write_float_array(val: Sequence[float]) None [source]¶
Writes a float array to this output stream.
- Parameters:
val – The float array to be written.
- write_short_array(val: Sequence[int]) None [source]¶
Writes a short array to this output stream.
- Parameters:
val – The short array to be written.
- write_string_array(val: Sequence[str]) None [source]¶
Writes a UTF-8 String array to this output stream.
- Parameters:
val – The UTF-8 String array to be written.
- write_utf_array(val: Sequence[str]) None [source]¶
Writes a UTF-8 String array to this output stream.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
write_string_array()
instead.- Parameters:
val – The UTF-8 String array to be written.
- write_object(val: Any) None [source]¶
Writes an object to this output stream.
- Parameters:
val – The object to be written.
- class ObjectDataInput[source]¶
Bases:
object
ObjectDataInput provides an interface to read bytes from a stream and reconstruct them into primitive types or arrays of them.
- read_into(buff: bytearray, offset: Optional[int] = None, length: Optional[int] = None) bytearray [source]¶
Reads the content of the buffer into an array of bytes.
- Parameters:
buff – Input buffer.
offset – Offset of the buffer where the read begins.
length – Length of data to be read.
- Returns:
The read data.
- skip_bytes(count: int) int [source]¶
Skips over the given number of bytes from input stream.
- Parameters:
count – Number of bytes to be skipped.
- Returns:
The actual number of bytes skipped.
- read_boolean() bool [source]¶
Reads 1 byte from input stream and converts it to a bool value.
- Returns:
The bool value read.
- read_byte() int [source]¶
Reads 1 byte from input stream and returns it.
- Returns:
The byte value read.
- read_unsigned_byte() int [source]¶
Reads 1 byte from input stream, zero-extends it, and returns it.
- Returns:
The unsigned byte value read.
- read_short() int [source]¶
Reads 2 bytes from input stream and returns a short value.
- Returns:
The short value read.
- read_unsigned_short() int [source]¶
Reads 2 bytes from input stream and returns an int value.
- Returns:
The unsigned short value read.
- read_char() str [source]¶
Reads 2 bytes from the input stream and returns a str value.
- Returns:
The char value read.
- read_int() int [source]¶
Reads 4 bytes from input stream and returns an int value.
- Returns:
The int value read.
- read_long() int [source]¶
Reads 8 bytes from input stream and returns a long value.
- Returns:
The long value read.
- read_float() float [source]¶
Reads 4 bytes from input stream and returns a float value.
- Returns:
The float value read.
- read_double() float [source]¶
Reads 8 bytes from input stream and returns a double value.
- Returns:
The double value read.
- read_string() str [source]¶
Reads a UTF-8 string from input stream and returns it.
- Returns:
The UTF-8 string read.
- read_utf() str [source]¶
Reads a UTF-8 string from input stream and returns it.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
read_string()
instead.- Returns:
The UTF-8 string read.
- read_byte_array() bytearray [source]¶
Reads a byte array from input stream and returns it.
- Returns:
The byte array read.
- read_boolean_array() List[bool] [source]¶
Reads a bool array from input stream and returns it.
- Returns:
The bool array read.
- read_char_array() List[str] [source]¶
Reads a char array from input stream and returns it.
- Returns:
The char array read.
- read_int_array() List[int] [source]¶
Reads an int array from input stream and returns it.
- Returns:
The int array read.
- read_long_array() List[int] [source]¶
Reads a long array from input stream and returns it.
- Returns:
The long array read.
- read_double_array() List[float] [source]¶
Reads a double array from input stream and returns it.
- Returns:
The double array read.
- read_float_array() List[float] [source]¶
Reads a float array from input stream and returns it.
- Returns:
The float array read.
- read_short_array() List[int] [source]¶
Reads a short array from input stream and returns it.
- Returns:
The short array read.
- read_string_array() List[str] [source]¶
Reads a UTF-8 string array from input stream and returns it.
- Returns:
The UTF-8 string array read.
- read_utf_array() List[str] [source]¶
Reads a UTF-8 string array from input stream and returns it.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
read_string_array()
instead.- Returns:
The UTF-8 string array read.
- class IdentifiedDataSerializable[source]¶
Bases:
object
IdentifiedDataSerializable is an alternative serialization method to Python pickle, which also avoids reflection during de-serialization.
Each IdentifiedDataSerializable is created by a registered DataSerializableFactory.
- write_data(object_data_output: ObjectDataOutput) None [source]¶
Writes object fields to output stream.
- Parameters:
object_data_output – The output.
- read_data(object_data_input: ObjectDataInput) None [source]¶
Reads fields from the input stream.
- Parameters:
object_data_input – The input.
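A hedged sketch of an implementation and its registration (the Employee class, its fields, and the factory/class ids are illustrative):
from hazelcast import HazelcastClient
from hazelcast.serialization.api import IdentifiedDataSerializable

class Employee(IdentifiedDataSerializable):
    FACTORY_ID = 1  # illustrative ids; must match the server-side registration
    CLASS_ID = 1

    def __init__(self, name=None, age=None):
        self.name = name
        self.age = age

    def write_data(self, object_data_output):
        object_data_output.write_string(self.name)
        object_data_output.write_int(self.age)

    def read_data(self, object_data_input):
        self.name = object_data_input.read_string()
        self.age = object_data_input.read_int()

    def get_factory_id(self):
        return self.FACTORY_ID

    def get_class_id(self):
        return self.CLASS_ID

client = HazelcastClient(
    data_serializable_factories={
        Employee.FACTORY_ID: {Employee.CLASS_ID: Employee},
    },
)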
- class Portable[source]¶
Bases:
object
Portable provides an alternative serialization method.
Instead of relying on reflection, each Portable is created by a registered PortableFactory. Portable serialization has the following advantages:
Supporting multiple versions of the same object type.
Fetching individual fields without having to rely on reflection.
Querying and indexing support without de-serialization and/or reflection.
- write_portable(writer: PortableWriter) None [source]¶
Serializes this portable object using the given PortableWriter.
- Parameters:
writer – The PortableWriter.
- read_portable(reader: PortableReader) None [source]¶
Reads portable fields using the given PortableReader.
- Parameters:
reader – The PortableReader.
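A hedged sketch of a Portable class and its registration (the Customer class, its fields, and the factory/class ids are illustrative):
from hazelcast import HazelcastClient
from hazelcast.serialization.api import Portable

class Customer(Portable):
    FACTORY_ID = 1  # illustrative ids
    CLASS_ID = 2

    def __init__(self, name=None, customer_id=None):
        self.name = name
        self.customer_id = customer_id

    def write_portable(self, writer):
        writer.write_string("name", self.name)
        writer.write_int("customer_id", self.customer_id)

    def read_portable(self, reader):
        self.name = reader.read_string("name")
        self.customer_id = reader.read_int("customer_id")

    def get_factory_id(self):
        return self.FACTORY_ID

    def get_class_id(self):
        return self.CLASS_ID

client = HazelcastClient(
    portable_factories={
        Customer.FACTORY_ID: {Customer.CLASS_ID: Customer},
    },
)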
- class StreamSerializer[source]¶
Bases:
object
A base class for custom serialization.
- write(out: ObjectDataOutput, obj: Any) None [source]¶
Writes the object to the ObjectDataOutput stream.
- Parameters:
out – Stream that the object will be written to.
obj – The object to be written.
- read(inp: ObjectDataInput) Any [source]¶
Reads an object from the ObjectDataInput stream.
- Parameters:
inp – Stream that the object will be read from.
- Returns:
The read object.
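A hedged sketch of a custom serializer for a plain Python class (the Point class, the type id, and the registration are illustrative):
from hazelcast import HazelcastClient
from hazelcast.serialization.api import StreamSerializer

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSerializer(StreamSerializer):
    def write(self, out, obj):
        out.write_int(obj.x)
        out.write_int(obj.y)

    def read(self, inp):
        return Point(inp.read_int(), inp.read_int())

    def get_type_id(self):
        return 10  # must be positive and unique among custom serializers

    def destroy(self):
        pass

client = HazelcastClient(
    custom_serializers={Point: PointSerializer},
)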
- class PortableReader[source]¶
Bases:
object
Provides a means of reading portable fields from a binary stream, in the form of Python primitives and arrays of these primitives, nested portable fields, and arrays of portable fields.
- get_version() int [source]¶
Returns the global version of portable classes.
- Returns:
Global version of portable classes.
- has_field(field_name: str) bool [source]¶
Determines whether the given field name exists in this portable class or not.
- Parameters:
field_name – Name of the field (does not support nested paths).
- Returns:
True
if the field name exists in class,False
otherwise.
- get_field_names() Set[str] [source]¶
Returns the set of field names on this portable class.
- Returns:
Set of field names on this portable class.
- get_field_type(field_name: str) FieldType [source]¶
Returns the field type of given field name.
- Parameters:
field_name – Name of the field.
- Returns:
The field type.
- get_field_class_id(field_name: str) int [source]¶
Returns the class id of given field.
- Parameters:
field_name – Name of the field.
- Returns:
Class id of given field.
- read_int(field_name: str) int [source]¶
Reads a primitive int.
- Parameters:
field_name – Name of the field.
- Returns:
The int value read.
- read_long(field_name: str) int [source]¶
Reads a primitive long.
- Parameters:
field_name – Name of the field.
- Returns:
The long value read.
- read_string(field_name: str) str [source]¶
Reads a UTF-8 String.
- Parameters:
field_name – Name of the field.
- Returns:
The UTF-8 String read.
- read_utf(field_name: str) str [source]¶
Reads a UTF-8 String.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
read_string()
instead.- Parameters:
field_name – Name of the field.
- Returns:
The UTF-8 String read.
- read_boolean(field_name: str) bool [source]¶
Reads a primitive bool.
- Parameters:
field_name – Name of the field.
- Returns:
The bool value read.
- read_byte(field_name: str) int [source]¶
Reads a primitive byte.
- Parameters:
field_name – Name of the field.
- Returns:
The byte value read.
- read_char(field_name: str) str [source]¶
Reads a primitive char.
- Parameters:
field_name – Name of the field.
- Returns:
The char value read.
- read_double(field_name: str) float [source]¶
Reads a primitive double.
- Parameters:
field_name – Name of the field.
- Returns:
The double value read.
- read_float(field_name: str) float [source]¶
Reads a primitive float.
- Parameters:
field_name – Name of the field.
- Returns:
The float value read.
- read_short(field_name: str) int [source]¶
Reads a primitive short.
- Parameters:
field_name – Name of the field.
- Returns:
The short value read.
- read_portable(field_name: str) Portable [source]¶
Reads a portable.
- Parameters:
field_name – Name of the field.
- Returns:
The portable read.
- read_decimal(field_name: str) Decimal [source]¶
Reads a decimal.
- Parameters:
field_name – Name of the field.
- Returns:
The decimal read.
- read_time(field_name: str) time [source]¶
Reads a time.
- Parameters:
field_name – Name of the field.
- Returns:
The time read.
- read_date(field_name: str) date [source]¶
Reads a date.
- Parameters:
field_name – Name of the field.
- Returns:
The date read.
- read_timestamp(field_name: str) datetime [source]¶
Reads a timestamp.
- Parameters:
field_name – Name of the field.
- Returns:
The timestamp read.
- read_timestamp_with_timezone(field_name: str) datetime [source]¶
Reads a timestamp with timezone.
- Parameters:
field_name – Name of the field.
- Returns:
The timestamp with timezone read.
- read_byte_array(field_name: str) bytearray [source]¶
Reads a primitive byte array.
- Parameters:
field_name – Name of the field.
- Returns:
The byte array read.
- read_boolean_array(field_name: str) List[bool] [source]¶
Reads a primitive bool array.
- Parameters:
field_name – Name of the field.
- Returns:
The bool array read.
- read_char_array(field_name: str) List[str] [source]¶
Reads a primitive char array.
- Parameters:
field_name – Name of the field.
- Returns:
The char array read.
- read_int_array(field_name: str) List[int] [source]¶
Reads a primitive int array.
- Parameters:
field_name – Name of the field.
- Returns:
The int array read.
- read_long_array(field_name: str) List[int] [source]¶
Reads a primitive long array.
- Parameters:
field_name – Name of the field.
- Returns:
The long array read.
- read_double_array(field_name: str) List[float] [source]¶
Reads a primitive double array.
- Parameters:
field_name – Name of the field.
- Returns:
The double array read.
- read_float_array(field_name: str) List[float] [source]¶
Reads a primitive float array.
- Parameters:
field_name – Name of the field.
- Returns:
The float array read.
- read_short_array(field_name: str) List[int] [source]¶
Reads a primitive short array.
- Parameters:
field_name – Name of the field.
- Returns:
The short array read.
- read_string_array(field_name: str) List[str] [source]¶
Reads a UTF-8 String array.
- Parameters:
field_name – Name of the field.
- Returns:
The UTF-8 String array read.
- read_utf_array(field_name: str) List[str] [source]¶
Reads a UTF-8 String array.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
read_string_array()
instead.- Parameters:
field_name – Name of the field.
- Returns:
The UTF-8 String array read.
- read_decimal_array(field_name: str) List[Decimal] [source]¶
Reads a decimal array.
- Parameters:
field_name – Name of the field.
- Returns:
The decimal array read.
- read_time_array(field_name: str) List[time] [source]¶
Reads a time array.
- Parameters:
field_name – Name of the field.
- Returns:
The time array read.
- read_date_array(field_name: str) List[date] [source]¶
Reads a date array.
- Parameters:
field_name – Name of the field.
- Returns:
The date array read.
- read_timestamp_array(field_name: str) List[datetime] [source]¶
Reads a timestamp array.
- Parameters:
field_name – Name of the field.
- Returns:
The timestamp array read.
- read_timestamp_with_timezone_array(field_name: str) List[datetime] [source]¶
Reads a timestamp with timezone array.
- Parameters:
field_name – Name of the field.
- Returns:
The timestamp with timezone array read.
- read_portable_array(field_name: str) List[Portable] [source]¶
Reads a portable array.
- Parameters:
field_name – Name of the field.
- Returns:
The portable array read.
- get_raw_data_input() ObjectDataInput [source]¶
After reading portable fields, one can read the remaining fields in the old-fashioned way, consecutively from the end of the stream. After get_raw_data_input() is called, no portable fields can be read.
- Returns:
The input.
- class PortableWriter[source]¶
Bases:
object
Provides a means of writing portable fields to a binary stream, in the form of Python primitives and arrays of these primitives, nested portable fields, and arrays of portable fields.
- write_int(field_name: str, value: int) None [source]¶
Writes a primitive int.
- Parameters:
field_name – Name of the field.
value – Int value to be written.
- write_long(field_name: str, value: int) None [source]¶
Writes a primitive long.
- Parameters:
field_name – Name of the field.
value – Long value to be written.
- write_string(field_name: str, value: str) None [source]¶
Writes a UTF-8 string.
- Parameters:
field_name – Name of the field.
value – UTF-8 string value to be written.
- write_utf(field_name: str, value: str) None [source]¶
Writes a UTF-8 string.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
write_string()
instead.- Parameters:
field_name – Name of the field.
value – UTF-8 string value to be written.
- write_boolean(field_name: str, value: bool) None [source]¶
Writes a primitive bool.
- Parameters:
field_name – Name of the field.
value – Bool value to be written.
- write_byte(field_name: str, value: int) None [source]¶
Writes a primitive byte.
- Parameters:
field_name – Name of the field.
value – Byte value to be written.
- write_char(field_name: str, value: str) None [source]¶
Writes a primitive char.
- Parameters:
field_name – Name of the field.
value – Char value to be written.
- write_double(field_name: str, value: float) None [source]¶
Writes a primitive double.
- Parameters:
field_name – Name of the field.
value – Double value to be written.
- write_float(field_name: str, value: float) None [source]¶
Writes a primitive float.
- Parameters:
field_name – Name of the field.
value – Float value to be written.
- write_short(field_name: str, value: int) None [source]¶
Writes a primitive short.
- Parameters:
field_name – Name of the field.
value – Short value to be written.
- write_portable(field_name: str, portable: Portable) None [source]¶
Writes a Portable.
- Parameters:
field_name – Name of the field.
portable – Portable to be written.
- write_null_portable(field_name: str, factory_id: int, class_id: int) None [source]¶
To write a null portable value, the user needs to provide the class and factory IDs of the related class.
- Parameters:
field_name – Name of the field.
factory_id – Factory id of related portable class.
class_id – Class id of related portable class.
- write_decimal(field_name: str, value: Decimal) None [source]¶
Writes a decimal.
- Parameters:
field_name – Name of the field.
value – Decimal to be written.
- write_time(field_name: str, value: time) None [source]¶
Writes a time.
- Parameters:
field_name – Name of the field.
value – Time to be written.
- write_date(field_name: str, value: date) None [source]¶
Writes a date.
- Parameters:
field_name – Name of the field.
value – Date to be written.
- write_timestamp(field_name: str, value: datetime) None [source]¶
Writes a timestamp.
- Parameters:
field_name – Name of the field.
value – Timestamp to be written.
- write_timestamp_with_timezone(field_name: str, value: datetime) None [source]¶
Writes a timestamp with timezone.
- Parameters:
field_name – Name of the field.
value – Timestamp with timezone to be written.
- write_byte_array(field_name: str, values: bytearray) None [source]¶
Writes a primitive byte array.
- Parameters:
field_name – Name of the field.
values – Bytearray to be written.
- write_boolean_array(field_name: str, values: Sequence[bool]) None [source]¶
Writes a primitive bool array.
- Parameters:
field_name – Name of the field.
values – Bool array to be written.
- write_char_array(field_name: str, values: Sequence[str]) None [source]¶
Writes a primitive char array.
- Parameters:
field_name – Name of the field.
values – Char array to be written.
- write_int_array(field_name: str, values: Sequence[int]) None [source]¶
Writes a primitive int array.
- Parameters:
field_name – Name of the field.
values – Int array to be written.
- write_long_array(field_name: str, values: Sequence[int]) None [source]¶
Writes a primitive long array.
- Parameters:
field_name – Name of the field.
values – Long array to be written.
- write_double_array(field_name: str, values: Sequence[float]) None [source]¶
Writes a primitive double array.
- Parameters:
field_name – Name of the field.
values – Double array to be written.
- write_float_array(field_name: str, values: Sequence[float]) None [source]¶
Writes a primitive float array.
- Parameters:
field_name – Name of the field.
values – Float array to be written.
- write_short_array(field_name: str, values: Sequence[int]) None [source]¶
Writes a primitive short array.
- Parameters:
field_name – Name of the field.
values – Short array to be written.
- write_string_array(field_name: str, values: Sequence[str]) None [source]¶
Writes a UTF-8 String array.
- Parameters:
field_name – Name of the field.
values – UTF-8 String array to be written.
- write_utf_array(field_name: str, values: Sequence[str]) None [source]¶
Writes a UTF-8 String array.
Deprecated since version 4.1: This method is deprecated and will be removed in the next major version. Use
write_string_array()
instead.- Parameters:
field_name – Name of the field.
values – UTF-8 String array to be written.
- write_decimal_array(field_name: str, values: Sequence[Decimal]) None [source]¶
Writes a decimal array.
- Parameters:
field_name – Name of the field.
values – Decimal array to be written.
- write_time_array(field_name: str, values: Sequence[time]) None [source]¶
Writes a time array.
- Parameters:
field_name – Name of the field.
values – Time array to be written.
- write_date_array(field_name: str, values: Sequence[date]) None [source]¶
Writes a date array.
- Parameters:
field_name – Name of the field.
values – Date array to be written.
- write_timestamp_array(field_name: str, values: Sequence[datetime]) None [source]¶
Writes a timestamp array.
- Parameters:
field_name – Name of the field.
values – Timestamp array to be written.
- write_timestamp_with_timezone_array(field_name: str, values: Sequence[datetime]) None [source]¶
Writes a timestamp with timezone array.
- Parameters:
field_name – Name of the field.
values – Timestamp with timezone array to be written.
- write_portable_array(field_name: str, values: Sequence[Portable]) None [source]¶
Writes a portable array.
- Parameters:
field_name – Name of the field.
values – Portable array to be written.
- get_raw_data_output() ObjectDataOutput [source]¶
After writing portable fields, one can write the remaining fields in the old-fashioned way, consecutively at the end of the stream. After get_raw_data_output() is called, no portable fields can be written.
- Returns:
The output.
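The snippet below is a minimal, illustrative sketch of how the portable reader and writer fit together in a Portable class; the Customer class, its field names, and the factory and class IDs are hypothetical and must match whatever factory is registered on the member side.
import hazelcast
from hazelcast.serialization.api import Portable

# Hypothetical IDs chosen for illustration.
FACTORY_ID = 1
CLASS_ID = 1

class Customer(Portable):
    def __init__(self, name=None, age=0):
        self.name = name
        self.age = age

    def write_portable(self, writer):
        # The PortableWriter is supplied by the client during serialization.
        writer.write_string("name", self.name)
        writer.write_int("age", self.age)

    def read_portable(self, reader):
        # Fields must be read with the same names and types they were written with.
        self.name = reader.read_string("name")
        self.age = reader.read_int("age")

    def get_factory_id(self):
        return FACTORY_ID

    def get_class_id(self):
        return CLASS_ID

# Register the class so that the client can serialize and deserialize it.
client = hazelcast.HazelcastClient(
    portable_factories={FACTORY_ID: {CLASS_ID: Customer}},
)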
- class CompactReader[source]¶
Bases:
ABC
Provides means of reading Compact serialized fields from the binary data.
Read operations might throw
hazelcast.errors.HazelcastSerializationError
when a field with the given name is not found or there is a type mismatch.The way to use CompactReader for class evolution is to check for the existence of a field with its name and kind, with the
get_field_kind()
method. One should read the field if it exists with the given name and kind, and use some other logic, like using a default value, if it does not exist.

def read(self, reader: CompactReader) -> Foo:
    bar = reader.read_int32("bar")  # A field that is always present
    if reader.get_field_kind("baz") == FieldKind.STRING:
        baz = reader.read_string("baz")
    else:
        baz = ""  # Use a default value, if the field is not present

    return Foo(bar, baz)
- abstract get_field_kind(field_name: str) FieldKind [source]¶
Returns the kind of the field for the given name.
If the field with the given name does not exist,
FieldKind.NOT_AVAILABLE
is returned.This method can be used to check the existence of a field, which can be useful when the class is evolved.
- Parameters:
field_name – Name of the field.
- Returns:
Kind of the field.
- abstract read_boolean(field_name: str) bool [source]¶
Reads a boolean.
This method can also read a nullable boolean, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable boolean value is read.
- abstract read_nullable_boolean(field_name: str) Optional[bool] [source]¶
Reads a nullable boolean.
This method can also read a non-nullable boolean.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_int8(field_name: str) int [source]¶
Reads an 8-bit two’s complement signed integer.
This method can also read a nullable int8, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable int8 value is read.
- abstract read_nullable_int8(field_name: str) Optional[int] [source]¶
Reads a nullable 8-bit two’s complement signed integer.
This method can also read a non-nullable int8.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_int16(field_name: str) int [source]¶
Reads a 16-bit two’s complement signed integer.
This method can also read a nullable int16, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable int16 value is read.
- abstract read_nullable_int16(field_name: str) Optional[int] [source]¶
Reads a nullable 16-bit two’s complement signed integer.
This method can also read a non-nullable int16.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_int32(field_name: str) int [source]¶
Reads a 32-bit two’s complement signed integer.
This method can also read a nullable int32, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable int32 value is read.
- abstract read_nullable_int32(field_name: str) Optional[int] [source]¶
Reads a nullable 32-bit two’s complement signed integer.
This method can also read a non-nullable int32.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_int64(field_name: str) int [source]¶
Reads a 64-bit two’s complement signed integer.
This method can also read a nullable int64, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable int64 value is read.
- abstract read_nullable_int64(field_name: str) Optional[int] [source]¶
Reads a nullable 64-bit two’s complement signed integer.
This method can also read a non-nullable int64.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_float32(field_name: str) float [source]¶
Reads a 32-bit IEEE 754 floating point number.
This method can also read a nullable float32, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable float32 value is read.
- abstract read_nullable_float32(field_name: str) Optional[float] [source]¶
Reads a nullable 32-bit IEEE 754 floating point number.
This method can also read a non-nullable float32.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_float64(field_name: str) float [source]¶
Reads a 64-bit IEEE 754 floating point number.
This method can also read a nullable float64, as long as it is not
None
.- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema, or a
None
nullable float64 value is read.
- abstract read_nullable_float64(field_name: str) Optional[float] [source]¶
Reads a nullable 64-bit IEEE 754 floating point number.
This method can also read a non-nullable float64.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_string(field_name: str) Optional[str] [source]¶
Reads a UTF-8 encoded string.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_decimal(field_name: str) Optional[Decimal] [source]¶
Reads an arbitrary precision and scale floating point number.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_time(field_name: str) Optional[time] [source]¶
Reads a time consisting of hour, minute, second, and nanoseconds.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_date(field_name: str) Optional[date] [source]¶
Reads a date consisting of year, month, and day.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_timestamp(field_name: str) Optional[datetime] [source]¶
Reads a timestamp consisting of date and time.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_timestamp_with_timezone(field_name: str) Optional[datetime] [source]¶
Reads a timestamp with timezone consisting of date, time and timezone offset.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_compact(field_name: str) Optional[Any] [source]¶
Reads a compact object.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_boolean(field_name: str) Optional[List[bool]] [source]¶
Reads an array of booleans.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_boolean(field_name: str) Optional[List[Optional[bool]]] [source]¶
Reads an array of nullable booleans.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_int8(field_name: str) Optional[List[int]] [source]¶
Reads an array of 8-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_int8(field_name: str) Optional[List[Optional[int]]] [source]¶
Reads an array of nullable 8-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_int16(field_name: str) Optional[List[int]] [source]¶
Reads an array of 16-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_int16(field_name: str) Optional[List[Optional[int]]] [source]¶
Reads an array of nullable 16-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_int32(field_name: str) Optional[List[int]] [source]¶
Reads an array of 32-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_int32(field_name: str) Optional[List[Optional[int]]] [source]¶
Reads an array of nullable 32-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_int64(field_name: str) Optional[List[int]] [source]¶
Reads an array of 64-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_int64(field_name: str) Optional[List[Optional[int]]] [source]¶
Reads an array of nullable 64-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_float32(field_name: str) Optional[List[float]] [source]¶
Reads an array of 32-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_float32(field_name: str) Optional[List[Optional[float]]] [source]¶
Reads an array of nullable 32-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_float64(field_name: str) Optional[List[float]] [source]¶
Reads an array of 64-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_nullable_float64(field_name: str) Optional[List[Optional[float]]] [source]¶
Reads an array of nullable 64-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_string(field_name: str) Optional[List[Optional[str]]] [source]¶
Reads an array of UTF-8 encoded strings.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_decimal(field_name: str) Optional[List[Optional[Decimal]]] [source]¶
Reads an array of arbitrary precision and scale floating point numbers.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_time(field_name: str) Optional[List[Optional[time]]] [source]¶
Reads an array of times consisting of hour, minute, second, and nanoseconds.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_date(field_name: str) Optional[List[Optional[date]]] [source]¶
Reads an array of dates consisting of year, month, and day.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_timestamp(field_name: str) Optional[List[Optional[datetime]]] [source]¶
Reads an array of timestamps consisting of date and time.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_timestamp_with_timezone(field_name: str) Optional[List[Optional[datetime]]] [source]¶
Reads an array of timestamp with timezones consisting of date, time and timezone offset.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- abstract read_array_of_compact(field_name: str) Optional[List[Optional[Any]]] [source]¶
Reads an array of compact objects.
- Parameters:
field_name – Name of the field.
- Returns:
The value of the field.
- Raises:
HazelcastSerializationError – If the field does not exist in the schema or the type of the field does not match with the one defined in the schema.
- class CompactWriter[source]¶
Bases:
ABC
Provides means of writing compact serialized fields to the binary data.
- abstract write_boolean(field_name: str, value: bool) None [source]¶
Writes a boolean.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_boolean(field_name: str, value: Optional[bool]) None [source]¶
Writes a nullable boolean.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_int8(field_name: str, value: int) None [source]¶
Writes an 8-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_int8(field_name: str, value: Optional[int]) None [source]¶
Writes a nullable 8-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_int16(field_name: str, value: int) None [source]¶
Writes a 16-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_int16(field_name: str, value: Optional[int]) None [source]¶
Writes a nullable 16-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_int32(field_name: str, value: int) None [source]¶
Writes a 32-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_int32(field_name: str, value: Optional[int]) None [source]¶
Writes a nullable 32-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_int64(field_name: str, value: int) None [source]¶
Writes a 64-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_int64(field_name: str, value: Optional[int]) None [source]¶
Writes a nullable 64-bit two’s complement signed integer.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_float32(field_name: str, value: float) None [source]¶
Writes a 32-bit IEEE 754 floating point number.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_float32(field_name: str, value: Optional[float]) None [source]¶
Writes a nullable 32-bit IEEE 754 floating point number.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_float64(field_name: str, value: float) None [source]¶
Writes a 64-bit IEEE 754 floating point number.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_nullable_float64(field_name: str, value: Optional[float]) None [source]¶
Writes a nullable 64-bit IEEE 754 floating point number.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_string(field_name: str, value: Optional[str]) None [source]¶
Writes a UTF-8 encoded string.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_decimal(field_name: str, value: Optional[Decimal]) None [source]¶
Writes an arbitrary precision and scale floating point number.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_time(field_name: str, value: Optional[time]) None [source]¶
Writes a time consisting of hour, minute, second, and nanoseconds.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_date(field_name: str, value: Optional[date]) None [source]¶
Writes a date consisting of year, month, and day.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_timestamp(field_name: str, value: Optional[datetime]) None [source]¶
Writes a timestamp consisting of date and time.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_timestamp_with_timezone(field_name: str, value: Optional[datetime]) None [source]¶
Writes a timestamp with timezone consisting of date, time, and timezone offset.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_compact(field_name: str, value: Optional[Any]) None [source]¶
Writes a nested compact object.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_boolean(field_name: str, value: Optional[List[bool]]) None [source]¶
Writes an array of booleans.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_boolean(field_name: str, value: Optional[List[Optional[bool]]]) None [source]¶
Writes an array of nullable booleans.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_int8(field_name: str, value: Optional[List[int]]) None [source]¶
Writes an array of 8-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_int8(field_name: str, value: Optional[List[Optional[int]]]) None [source]¶
Writes an array of nullable 8-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_int16(field_name: str, value: Optional[List[int]]) None [source]¶
Writes an array of 16-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_int16(field_name: str, value: Optional[List[Optional[int]]]) None [source]¶
Writes an array of nullable 16-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_int32(field_name: str, value: Optional[List[int]]) None [source]¶
Writes an array of 32-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_int32(field_name: str, value: Optional[List[Optional[int]]]) None [source]¶
Writes an array of nullable 32-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_int64(field_name: str, value: Optional[List[int]]) None [source]¶
Writes an array of 64-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_int64(field_name: str, value: Optional[List[Optional[int]]]) None [source]¶
Writes an array of nullable 64-bit two’s complement signed integers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_float32(field_name: str, value: Optional[List[float]]) None [source]¶
Writes an array of 32-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_float32(field_name: str, value: Optional[List[Optional[float]]]) None [source]¶
Writes an array of nullable 32-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_float64(field_name: str, value: Optional[List[float]]) None [source]¶
Writes an array of 64-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_nullable_float64(field_name: str, value: Optional[List[Optional[float]]]) None [source]¶
Writes an array of nullable 64-bit IEEE 754 floating point numbers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_string(field_name: str, value: Optional[List[Optional[str]]]) None [source]¶
Writes an array of UTF-8 encoded strings.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_decimal(field_name: str, value: Optional[List[Optional[Decimal]]]) None [source]¶
Writes an array of arbitrary precision and scale floating point numbers.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_time(field_name: str, value: Optional[List[Optional[time]]]) None [source]¶
Writes an array of times consisting of hour, minute, second, and nanoseconds.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_date(field_name: str, value: Optional[List[Optional[date]]]) None [source]¶
Writes an array of dates consisting of year, month, and day.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_timestamp(field_name: str, value: Optional[List[Optional[datetime]]]) None [source]¶
Writes an array of timestamps consisting of date and time.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_timestamp_with_timezone(field_name: str, value: Optional[List[Optional[datetime]]]) None [source]¶
Writes an array of timestamps with timezone consisting of date, time, and timezone offset.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- abstract write_array_of_compact(field_name: str, value: Optional[List[Optional[Any]]]) None [source]¶
Writes an array of nested compact objects.
- Parameters:
field_name – Name of the field.
value – Value to be written.
- Raises:
hazelcast.errors.HazelcastSerializationError – If the list contains different item types.
- CompactSerializableType¶
Type of the Compact serializable classes.
alias of TypeVar(‘CompactSerializableType’)
- class CompactSerializer(*args, **kwds)[source]¶
Bases:
Generic
[CompactSerializableType
],ABC
Defines the contract of the serializers used for Compact serialization.
After defining a serializer for the objects of the class
CompactSerializableType
, the serializer can be registered to thehazelcast.config.Config.compact_serializers
.write()
andread()
methods must be consistent with each other.- abstract read(reader: CompactReader) CompactSerializableType [source]¶
Deserializes the object from the reader.
- Parameters:
reader – Reader to read fields of an object.
- Returns:
The object read.
- Raises:
hazelcast.errors.HazelcastSerializationError – In case of failure to read.
- abstract write(writer: CompactWriter, obj: CompactSerializableType) None [source]¶
Serializes the object to writer.
- Parameters:
writer – Writer to serialize the fields.
obj – Object to be serialized.
- Raises:
hazelcast.errors.HazelcastSerializationError – In case of failure to write.
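The following is a minimal sketch of a complete serializer; the Employee class and the "employee" type name are illustrative assumptions, and get_type_name() and get_class() complete the serializer contract.
import hazelcast
from hazelcast.serialization.api import CompactReader, CompactSerializer, CompactWriter

class Employee:
    def __init__(self, name, age):
        self.name = name
        self.age = age

class EmployeeSerializer(CompactSerializer[Employee]):
    def read(self, reader: CompactReader) -> Employee:
        # Read fields with the same names and types used in write().
        name = reader.read_string("name")
        age = reader.read_int32("age")
        return Employee(name, age)

    def write(self, writer: CompactWriter, obj: Employee) -> None:
        writer.write_string("name", obj.name)
        writer.write_int32("age", obj.age)

    def get_type_name(self) -> str:
        # Identifies the schema; it must be the same for every client
        # that serializes or deserializes Employee objects.
        return "employee"

    def get_class(self):
        return Employee

client = hazelcast.HazelcastClient(
    compact_serializers=[EmployeeSerializer()],
)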
- class FieldKind(value)[source]¶
Bases:
IntEnum
Represents the types of the fields used in the Compact serialization.
- NOT_AVAILABLE = 0¶
Represents fields that do not exist.
- BOOLEAN = 1¶
- ARRAY_OF_BOOLEAN = 2¶
- INT8 = 3¶
- ARRAY_OF_INT8 = 4¶
- CHAR = 5¶
- ARRAY_OF_CHAR = 6¶
- INT16 = 7¶
- ARRAY_OF_INT16 = 8¶
- INT32 = 9¶
- ARRAY_OF_INT32 = 10¶
- INT64 = 11¶
- ARRAY_OF_INT64 = 12¶
- FLOAT32 = 13¶
- ARRAY_OF_FLOAT32 = 14¶
- FLOAT64 = 15¶
- ARRAY_OF_FLOAT64 = 16¶
- STRING = 17¶
- ARRAY_OF_STRING = 18¶
- DECIMAL = 19¶
- ARRAY_OF_DECIMAL = 20¶
- TIME = 21¶
- ARRAY_OF_TIME = 22¶
- DATE = 23¶
- ARRAY_OF_DATE = 24¶
- TIMESTAMP = 25¶
- ARRAY_OF_TIMESTAMP = 26¶
- TIMESTAMP_WITH_TIMEZONE = 27¶
- ARRAY_OF_TIMESTAMP_WITH_TIMEZONE = 28¶
- COMPACT = 29¶
- ARRAY_OF_COMPACT = 30¶
- PORTABLE = 31¶
- ARRAY_OF_PORTABLE = 32¶
- NULLABLE_BOOLEAN = 33¶
- ARRAY_OF_NULLABLE_BOOLEAN = 34¶
- NULLABLE_INT8 = 35¶
- ARRAY_OF_NULLABLE_INT8 = 36¶
- NULLABLE_INT16 = 37¶
- ARRAY_OF_NULLABLE_INT16 = 38¶
- NULLABLE_INT32 = 39¶
- ARRAY_OF_NULLABLE_INT32 = 40¶
- NULLABLE_INT64 = 41¶
- ARRAY_OF_NULLABLE_INT64 = 42¶
- NULLABLE_FLOAT32 = 43¶
- ARRAY_OF_NULLABLE_FLOAT32 = 44¶
- NULLABLE_FLOAT64 = 45¶
- ARRAY_OF_NULLABLE_FLOAT64 = 46¶
SQL¶
- class SqlExpectedResultType[source]¶
Bases:
object
The expected statement result type.
- ANY = 0¶
The statement may produce either rows or an update count.
- ROWS = 1¶
The statement must produce rows. An exception is thrown if the statement produces an update count.
- UPDATE_COUNT = 2¶
The statement must produce an update count. An exception is thrown if the statement produces rows.
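For example, assuming a connected client and an existing employees mapping (both illustrative), a statement can be asserted to produce only an update count by passing the expected result type to the SQL service:
from hazelcast.sql import SqlExpectedResultType

# Fails with an error if the statement were to produce rows instead.
client.sql.execute(
    "UPDATE employees SET age = age + 1",
    expected_result_type=SqlExpectedResultType.UPDATE_COUNT,
).result()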
- class SqlService(internal_sql_service)[source]¶
Bases:
object
A service to execute SQL statements.
Warning
In order to use this service, the Jet engine must be enabled on the members and the
hazelcast-sql
module must be in the classpath of the members.If you are using the CLI, Docker image, or distributions to start Hazelcast members, then you don’t need to do anything, as the above preconditions are already satisfied for such members.
However, if you are using Hazelcast members in the embedded mode, or receiving errors saying that
The Jet engine is disabled
orCannot execute SQL query because "hazelcast-sql" module is not in the classpath.
while executing queries, enable the Jet engine following one of the instructions pointed out in the error message, or add thehazelcast-sql
module to your member’s classpath.Overview
Hazelcast is currently able to execute distributed SQL queries using the following connectors:
IMap (to query data stored in a
Map
)Kafka
Files
SQL statements are not atomic. INSERT/SINK can fail and commit part of the data.
Usage
Before you can access any object using SQL, a mapping has to be created. See the documentation for the
CREATE MAPPING
command.When a query is executed, an
SqlResult
is returned. You may get a row iterator from the result. The result must be closed at the end. The iterator will close the result automatically when it is exhausted, given that no error is raised during the iteration. The code snippet below demonstrates a typical usage pattern:

client = hazelcast.HazelcastClient()

result = client.sql.execute("SELECT * FROM person").result()

for row in result:
    print(row.get_object("person_id"))
    print(row.get_object("name"))
    ...
See the documentation of the
SqlResult
for more information about different iteration methods.- execute(sql: str, *params: Any, cursor_buffer_size: int = 4096, timeout: float = -1, expected_result_type: int = 0, schema: Optional[str] = None) Future[SqlResult] [source]¶
Executes an SQL statement.
- Parameters:
sql – SQL string.
*params – Query parameters that will replace the placeholders on the server side. You may define parameter placeholders in the query with the
?
character. For every placeholder, a parameter value must be provided.cursor_buffer_size –
The cursor buffer size measured in the number of rows.
When a statement is submitted for execution, a
SqlResult
is returned as a result. When rows are ready to be consumed, they are put into an internal buffer of the cursor. This parameter defines the maximum number of rows in that buffer. When the threshold is reached, the backpressure mechanism will slow down the execution, possibly to a complete halt, to prevent running out of memory. Only positive values are allowed.
The default value is expected to work well for most workloads. A bigger buffer size may give you a slight performance boost for queries with large result sets at the cost of increased memory consumption.
Defaults to
4096
.timeout –
The execution timeout in seconds.
If the timeout is reached for a running statement, it will be cancelled forcefully.
Zero value means no timeout.
-1
means that the value from the server-side config will be used. Other negative values are prohibited.Defaults to
-1
.expected_result_type – The expected result type.
schema –
The schema name.
The engine will try to resolve the non-qualified object identifiers from the statement in the given schema. If not found, the default search path will be used.
The schema name is case sensitive. For example,
foo
andFoo
are different schemas.The default value is
None
meaning only the default search path is used.
- Returns:
The execution result.
- Raises:
HazelcastSqlError – In case of execution error.
AssertionError – If the
sql
parameter is not a string, theschema
is not a string orNone
, thetimeout
is not an integer or float, or thecursor_buffer_size
is not an integer.ValueError – If the
sql
parameter is an empty string, thetimeout
is negative and not equal to-1
, thecursor_buffer_size
is not positive.TypeError – If the
expected_result_type
is not equal to one of the values or names of the members of theSqlExpectedResultType
.
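As a quick illustration of these parameters, a parameterized blocking query might look like the sketch below; the employees mapping and the connected client are assumptions.
# 30 is bound to the "?" placeholder; the keyword arguments are optional.
result = client.sql.execute(
    "SELECT name FROM employees WHERE age > ?",
    30,
    cursor_buffer_size=1024,
    timeout=5.0,
).result()

for row in result:
    print(row.get_object("name"))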
- class SqlColumnMetadata(name, column_type, nullable, is_nullable_exists)[source]¶
Bases:
object
Metadata of a column in an SQL row.
- property name: str¶
Name of the column.
- property type: int¶
Type of the column.
- property nullable: bool¶
True
if this column values can beNone
,False
otherwise.
- class SqlColumnType[source]¶
Bases:
object
- VARCHAR = 0¶
Represented by
str
.
- BOOLEAN = 1¶
Represented by
bool
.
- TINYINT = 2¶
Represented by
int
.
- SMALLINT = 3¶
Represented by
int
.
- INTEGER = 4¶
Represented by
int
.
- BIGINT = 5¶
Represented by
int
.
- DECIMAL = 6¶
Represented by
decimal.Decimal
.
- REAL = 7¶
Represented by
float
.
- DOUBLE = 8¶
Represented by
float
.
- DATE = 9¶
Represented by
datetime.date
.
- TIME = 10¶
Represented by
datetime.time
.
- TIMESTAMP = 11¶
Represented by
datetime.datetime
withNone
tzinfo
.
- TIMESTAMP_WITH_TIME_ZONE = 12¶
Represented by
datetime.datetime
withnon-None
tzinfo
.
- OBJECT = 13¶
Could be represented by any Python class.
- NULL = 14¶
The type of the generic SQL
NULL
literal.The only valid value of
NULL
type isNone
.
- JSON = 15¶
Represented by
hazelcast.core.HazelcastJsonValue
.
- exception HazelcastSqlError(originating_member_uuid, code, message, cause, suggestion=None)[source]¶
Bases:
HazelcastError
Represents an error occurred during the SQL query execution.
- property originating_member_uuid: UUID¶
UUID of the member that caused or initiated an error condition.
- property suggestion: str¶
Suggested SQL statement to remediate the experienced error.
- class SqlRowMetadata(columns)[source]¶
Bases:
object
Metadata for the returned rows.
- COLUMN_NOT_FOUND = -1¶
Constant indicating that the column is not found.
- property columns: List[SqlColumnMetadata]¶
List of column metadata.
- property column_count: int¶
Number of columns in the row.
- get_column(index: int) SqlColumnMetadata [source]¶
- Parameters:
index – Zero-based column index.
- Returns:
Metadata for the given column index.
- Raises:
IndexError – If the index is out of bounds.
AssertionError – If the index is not an integer.
- find_column(column_name: str) int [source]¶
- Parameters:
column_name – Name of the column.
- Returns:
Column index or
COLUMN_NOT_FOUND
if a column with the given name is not found.- Raises:
AssertionError – If the column name is not a string.
- class SqlRow(row_metadata, row)[source]¶
Bases:
object
One of the rows of an SQL query result.
The columns of the rows can be retrieved using
get_object()
with column name.get_object_with_index()
with column index.
Apart from these methods, the row objects can also be treated as a
dict
orlist
and columns can be retrieved using the[]
operator.If an integer value is passed to the
[]
operator, it will implicitly call theget_object_with_index()
and return the result.For any other type passed into the
[]
operator,get_object()
will be called. Note that,get_object()
expectsstr
values. Hence, the[]
operator will raise error for any type other than integer and string.- get_object(column_name: str) Any [source]¶
Gets the value in the column indicated by the column name.
Column name should be one of those defined in
SqlRowMetadata
, case-sensitive. You may also useSqlRowMetadata.find_column()
to test for column existence.The type of the returned value depends on the SQL type of the column. No implicit conversions are performed on the value.
- Parameters:
column_name – The column name.
- Returns:
Value of the column.
- Raises:
ValueError – If a column with the given name does not exist.
AssertionError – If the column name is not a string.
- get_object_with_index(column_index: int) Any [source]¶
Gets the value of the column by index.
The class of the returned value depends on the SQL type of the column. No implicit conversions are performed on the value.
- Parameters:
column_index – Zero-based column index.
- Returns:
Value of the column.
- Raises:
IndexError – If the column index is out of bounds.
AssertionError – If the column index is not an integer.
- property metadata: SqlRowMetadata¶
The row metadata.
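For example, both access styles below return the same column values, assuming an employees mapping with name and age columns:
with client.sql.execute("SELECT name, age FROM employees").result() as result:
    for row in result:
        print(row["name"])  # String key: delegates to get_object()
        print(row[1])       # Integer key: delegates to get_object_with_index()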
- class SqlResult(sql_service, connection, query_id, cursor_buffer_size, execute_response)[source]¶
Bases:
Iterable
[SqlRow
]SQL query result.
Depending on the statement type, it represents a stream of rows or an update count.
To iterate over the stream of rows, there are two possible options.
The first and easiest option is to iterate over the rows in a blocking fashion.
result = client.sql.execute("SELECT ...").result()

for row in result:
    # Process the row.
    print(row)
The second option is to use the non-blocking API with callbacks.
future = client.sql.execute("SELECT ...")  # Future of SqlResult

def on_response(sql_result_future):
    iterator = sql_result_future.result().iterator()

    def on_next_row(row_future):
        try:
            row = row_future.result()
            # Process the row.
            print(row)

            # Iterate over the next row.
            next(iterator).add_done_callback(on_next_row)
        except StopIteration:
            # Exhausted the iterator. No more rows are left.
            pass

    next(iterator).add_done_callback(on_next_row)

future.add_done_callback(on_response)
When in doubt, use the blocking API shown in the first code sample.
Note that iterators can be requested at most once per SqlResult.
One can call
close()
method of a result object to release the resources associated with the result on the server side. It might also be used to cancel query execution on the server side if it is still active.When the blocking API is used, one might also use
with
statement to automatically close the query even if an exception is thrown in the iteration.

with client.sql.execute("SELECT ...").result() as result:
    for row in result:
        # Process the row.
        print(row)
To get the number of rows updated by the query, use the
update_count()
.

update_count = client.sql.execute("UPDATE ...").result().update_count()
One does not have to call
close()
in this case, because the result will already be closed on the server side.- iterator() Iterator[Future[SqlRow]] [source]¶
Returns the iterator over the result rows.
The iterator may be requested only once.
- Raises:
ValueError – If the result only contains an update count, or the iterator is already requested.
- Returns:
Iterator that produces Future of
SqlRow
s. See the class documentation for the correct way to use this.
- update_count() int [source]¶
Returns the number of rows updated by the statement or
-1
if this result is a row set. In case the result doesn’t contain rows but the update count isn’t applicable or known,0
is returned.
- get_row_metadata() SqlRowMetadata [source]¶
Gets the row metadata.
- Raises:
ValueError – If the result only contains an update count.
- close() Future[None] [source]¶
Release the resources associated with the query result.
The query engine delivers the rows asynchronously. The query may become inactive even before all rows are consumed. The invocation of this command will cancel the execution of the query on all members if the query is still active. Otherwise, it is a no-op. For a result with an update count, it is always a no-op.
The returned Future results with:
HazelcastSqlError
: In case there is an error closing the result.
Transaction¶
- TWO_PHASE = 1¶
The two-phase commit is separated into two parts. First, it tries to execute the prepare; if there are any conflicts, the prepare fails. Once the prepare has succeeded, the commit (writing the changes) can be executed.
Hazelcast also provides a three-phase transaction by automatically copying the backlog to another member so that, in case of failure during a commit, another member can continue the commit from the backup.
- ONE_PHASE = 2¶
The one-phase transaction executes a transaction using a single step at the end: committing the changes. There is no prepare phase, so conflicts are not detected. If there is a conflict, then when the transaction commits the changes, some of the changes are written and others are not, leaving the system in a potentially permanently inconsistent state.
- class TransactionManager(context)[source]¶
Bases:
object
Manages the execution of client transactions and provides Transaction objects.
- new_transaction(timeout: float, durability: int, transaction_type: int) Transaction [source]¶
Creates a Transaction object with the given timeout, durability, and transaction type.
- Parameters:
timeout – The timeout in seconds determines the maximum lifespan of a transaction.
durability – The durability is the number of machines that can take over if a member fails during a transaction commit or rollback.
transaction_type – The transaction type, which can be
hazelcast.transaction.TWO_PHASE
orhazelcast.transaction.ONE_PHASE
.
- Returns:
The newly created Transaction.
- class Transaction(context, connection, timeout, durability, transaction_type)[source]¶
Bases:
object
Provides transactional operations: beginning/committing transactions, but also retrieving transactional data structures like the TransactionalMap.
- state = 'not_started'¶
- id: Optional[UUID] = None¶
- start_time: Optional[float] = None¶
- thread_id: Optional[int] = None¶
- get_list(name: str) TransactionalList [source]¶
Returns the transactional list instance with the specified name.
- Parameters:
name – The specified name.
- Returns:
The instance of Transactional List with the specified name.
- get_map(name: str) TransactionalMap [source]¶
Returns the transactional map instance with the specified name.
- Parameters:
name – The specified name.
- Returns:
The instance of Transactional Map with the specified name.
- get_multi_map(name: str) TransactionalMultiMap [source]¶
Returns the transactional multimap instance with the specified name.
- Parameters:
name – The specified name.
- Returns:
The instance of Transactional MultiMap with the specified name.
- get_queue(name: str) TransactionalQueue [source]¶
Returns the transactional queue instance with the specified name.
- Parameters:
name – The specified name.
- Returns:
The instance of Transactional Queue with the specified name.
- get_set(name: str) TransactionalSet [source]¶
Returns the transactional set instance with the specified name.
- Parameters:
name – The specified name.
- Returns:
The instance of Transactional Set with the specified name.
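Putting these pieces together, a typical two-phase transaction might be sketched as follows; the map name is illustrative.
transaction = client.new_transaction(timeout=10)
transaction.begin()
try:
    tx_map = transaction.get_map("transactional-map")
    tx_map.put("key", "value")
    transaction.commit()
except Exception:
    # Roll back on any failure so that no partial changes are left behind.
    transaction.rollback()
    raise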
Util¶
- class LoadBalancer[source]¶
Bases:
object
The load balancer allows you to send operations to one of a number of endpoints (Members). It is up to the implementation to use different load-balancing policies.
If the client is configured with smart routing, only the operations that are not key-based will be routed to the endpoint.
- init(cluster_service)[source]¶
Initializes the load balancer.
- Parameters:
cluster_service (hazelcast.cluster.ClusterService) – The cluster service to select members from.
- class RoundRobinLB[source]¶
Bases:
_AbstractLoadBalancer
A load balancer implementation that uses round robin to choose the next member to send a request to.
Round robin is done on a best-effort basis; the order of members for concurrent calls to next() is not guaranteed.
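A load balancer can be supplied through the client configuration; the sketch below uses the built-in round-robin implementation.
import hazelcast
from hazelcast.util import RoundRobinLB

client = hazelcast.HazelcastClient(
    load_balancer=RoundRobinLB(),
)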
Getting Started¶
This chapter provides information on how to get started with your Hazelcast Python client. It outlines the requirements and the installation and configuration of the client, walks through setting up a cluster, and provides a simple application that uses a distributed map in the Python client.
Requirements¶
Windows, Linux/UNIX or Mac OS X
Python 3.6 or newer
Java 8 or newer
Hazelcast 4.0 or newer
Latest Hazelcast Python client
Working with Hazelcast Clusters¶
Hazelcast Python client requires a working Hazelcast cluster to run. This cluster handles storage and manipulation of the user data. Clients are a way to connect to the Hazelcast cluster and access such data.
A Hazelcast cluster consists of one or more cluster members. These members generally run on multiple virtual or physical machines and are connected to each other via a network. Any data put on the cluster is partitioned across multiple members, transparently to the user. It is therefore very easy to scale the system by adding new members as the data grows. A Hazelcast cluster also offers resilience. Should any hardware or software problem cause a member to crash, the data on that member is recovered from backups and the cluster continues to operate without any downtime. Hazelcast clients are an easy way to connect to a Hazelcast cluster and perform tasks on distributed data structures that live on the cluster.
In order to use the Hazelcast Python client, we first need to set up a Hazelcast cluster.
Setting Up a Hazelcast Cluster¶
There are the following options to start a Hazelcast cluster easily:
You can use our Docker images.
docker run -p 5701:5701 hazelcast/hazelcast:5.3.0
You can use Hazelcast CLI.
You can run standalone members by downloading and running distribution files from the website.
You can embed members to your Java projects.
We are going to download distribution files from the website and run a standalone member for this guide.
Running Standalone JARs¶
Follow the instructions below to create a Hazelcast cluster:
Go to Hazelcast’s download page and download either the
.zip
or.tar
distribution of Hazelcast.Decompress the contents into any directory that you want to run members from.
Change into the directory into which you decompressed the Hazelcast contents, and then into the
bin
directory.Use either
hz-start
orhz-start.bat
depending on your operating system. Once you run the start script, you should see the Hazelcast logs in the terminal.
You should see a log similar to the following, which means that your 1-member cluster is ready to be used:
Sep 03, 2020 2:21:57 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT] [192.168.1.10]:5701 is STARTING
Sep 03, 2020 2:21:58 PM com.hazelcast.internal.cluster.ClusterService
INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT]
Members {size:1, ver:1} [
Member [192.168.1.10]:5701 - 7362c66f-ef9f-4a6a-a003-f8b33dfd292a this
]
Sep 03, 2020 2:21:58 PM com.hazelcast.core.LifecycleService
INFO: [192.168.1.10]:5701 [dev] [4.1-SNAPSHOT] [192.168.1.10]:5701 is STARTED
Adding User Library to CLASSPATH¶
When you want to use features such as querying and language
interoperability, you might need to add your own Java classes to the
Hazelcast member in order to use them from your Python client. This can
be done by adding your own compiled code to the CLASSPATH
. To do
this, compile your code with the CLASSPATH
and add the compiled
files to the user-lib
directory in the extracted
hazelcast-<version>.zip
(or tar
). Then, you can start your
Hazelcast member by using the start scripts in the bin
directory.
The start scripts will automatically add your compiled classes to the
CLASSPATH
.
Note that if you are adding an IdentifiedDataSerializable
or a
Portable
class, you need to add its factory too. Then you should
configure the factory in the hazelcast.xml
configuration file. This
file resides in the bin
directory where you extracted the
hazelcast-<version>.zip
(or tar
).
The following is an example configuration when you are adding an
IdentifiedDataSerializable
class:
<hazelcast>
...
<serialization>
<data-serializable-factories>
<data-serializable-factory factory-id="<identified-factory-id>">
IdentifiedFactoryClassName
</data-serializable-factory>
</data-serializable-factories>
</serialization>
...
</hazelcast>
If you want to add a Portable
class, you should use
<portable-factories>
instead of <data-serializable-factories>
in
the above configuration.
See the Hazelcast Reference Manual for more information on setting up the clusters.
Downloading and Installing¶
You can download and install the Python client from PyPI using pip. Run the following command:
pip install hazelcast-python-client
Alternatively, it can be installed from the source using the following command:
python setup.py install
Basic Configuration¶
If you are using Hazelcast and the Python client on the same computer, generally the default configuration should be fine. This is great for trying out the client. However, if you run the client on a different computer than any of the cluster members, you may need to do some simple configuration, such as specifying the member addresses.
The Hazelcast members and clients have their own configuration options. You may need to reflect some of the member side configurations on the client side to properly connect to the cluster.
This section describes the most common configuration elements to get you started in no time. It discusses some member side configuration options to ease the understanding of Hazelcast’s ecosystem. Then, the client side configuration options regarding the cluster connection are discussed. The configurations for the Hazelcast data structures that can be used in the Python client are discussed in the following sections.
See the Hazelcast Reference Manual and Configuration Overview section for more information.
Configuring Hazelcast¶
Hazelcast aims to run out-of-the-box for most common scenarios. However, if you have limitations on your network, such as multicast being disabled, you may have to configure your Hazelcast members so that they can find each other on the network. Also, since most of the distributed data structures are configurable, you may want to configure them according to your needs. We will show you the basics about network configuration here.
You can use the following options to configure Hazelcast:
Using the
hazelcast.xml
configuration file.Programmatically configuring the member before starting it from the Java code.
Since we use standalone servers, we will use the hazelcast.xml
file
to configure our cluster members.
When you download and unzip hazelcast-<version>.zip
(or tar
),
you see the hazelcast.xml
in the bin
directory. When a Hazelcast
member starts, it looks for the hazelcast.xml
file to load the
configuration from. A sample hazelcast.xml
is shown below.
<hazelcast>
<cluster-name>dev</cluster-name>
<network>
<port auto-increment="true" port-count="100">5701</port>
<join>
<multicast enabled="true">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="false">
<interface>127.0.0.1</interface>
<member-list>
<member>127.0.0.1</member>
</member-list>
</tcp-ip>
</join>
<ssl enabled="false"/>
</network>
<partition-group enabled="false"/>
<map name="default">
<backup-count>1</backup-count>
</map>
</hazelcast>
We will go over some important configuration elements in the rest of this section.
<cluster-name>: Specifies which cluster this member belongs to. A member connects only to the other members that are in the same cluster as itself. You may give your clusters different names so that they can live in the same network without disturbing each other. Note that the cluster name should be the same across all members and clients that belong to the same cluster.
<network>
<port>: Specifies the port number to be used by the member when it starts. Its default value is 5701. You can specify another port number, and if you set auto-increment to true, then Hazelcast will try the subsequent ports until it finds an available port or the port-count is reached.
<join>: Specifies the strategies to be used by the member to find other cluster members. Choose which strategy you want to use by setting its enabled attribute to true and the others to false.
<multicast>: Members find each other by sending multicast requests to the specified address and port. It is very useful if the IP addresses of the members are not static.
<tcp-ip>: This strategy uses a pre-configured list of known members to find an already existing cluster. It is enough for a member to find only one cluster member to connect to the cluster; the rest of the member list is automatically retrieved from that member. We recommend putting multiple known member addresses there to avoid disconnectivity should one of the members in the list be unavailable at the time of connection.
These configuration elements are enough for most connection scenarios. Now we will move on to the configuration of the Python client.
Configuring Hazelcast Python Client¶
To configure your Hazelcast Python client, you need to pass configuration options as keyword arguments to your client at startup. The names of the configuration options are similar to those in the hazelcast.xml configuration file used when configuring the member, but flatter. It is done this way to make it easier to transfer Hazelcast skills across multiple platforms.
This section describes some network configuration settings to cover common use cases in connecting the client to a cluster. See the Configuration Overview section and the following sections for information about detailed network configurations and/or additional features of Hazelcast Python client configuration.
import hazelcast
client = hazelcast.HazelcastClient(
cluster_members=[
"some-ip-address:port"
],
cluster_name="name-of-your-cluster",
)
It’s also possible to omit the keyword arguments in order to use the default settings.
import hazelcast
client = hazelcast.HazelcastClient()
If you run the Hazelcast members on a different server than the client, you have most probably configured the members’ ports and cluster names as explained in the previous section. If you did, then you need to reflect those changes in the network settings of your client.
Cluster Name Setting¶
You need to provide the name of the cluster, if it is defined on the server side, to which you want the client to connect.
import hazelcast
client = hazelcast.HazelcastClient(
cluster_name="name-of-your-cluster",
)
Network Settings¶
You need to provide the IP address and port of at least one member in your cluster so the client can find it.
import hazelcast
client = hazelcast.HazelcastClient(
cluster_members=["some-ip-address:port"]
)
Basic Usage¶
Now that we have a working cluster and we know how to configure both our cluster and client, we can run a simple program to use a distributed map in the Python client.
import logging
import hazelcast
# Enable logging to see the logs
logging.basicConfig(level=logging.INFO)
# Connect to Hazelcast cluster
client = hazelcast.HazelcastClient()
client.shutdown()
This should print logs about the cluster members, such as address, port, and UUID, to stderr.
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTING
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTED
INFO:hazelcast.connection:Trying to connect to Address(host=127.0.0.1, port=5701)
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is CONNECTED
INFO:hazelcast.connection:Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56718)
INFO:hazelcast.cluster:
Members [1] {
Member [172.17.0.2]:5701 - 7682c357-3bec-4841-b330-6f9ae0c08253
}
INFO:hazelcast.client:Client started
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTTING_DOWN
INFO:hazelcast.connection:Removed connection to Address(host=127.0.0.1, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, connection: Connection(id=0, live=False, remote_address=Address(host=172.17.0.2, port=5701))
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is DISCONNECTED
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTDOWN
Congratulations. You just started a Hazelcast Python client.
Using a Map¶
Let’s manipulate a distributed map (similar to Python’s builtin dict) on a cluster using the client.
import hazelcast
client = hazelcast.HazelcastClient()
personnel_map = client.get_map("personnel-map")
personnel_map.put("Alice", "IT")
personnel_map.put("Bob", "IT")
personnel_map.put("Clark", "IT")
print("Added IT personnel. Printing all known personnel")
for person, department in personnel_map.entry_set().result():
print("%s is in %s department" % (person, department))
client.shutdown()
Output
Added IT personnel. Printing all known personnel
Alice is in IT department
Clark is in IT department
Bob is in IT department
You see this example puts all the IT personnel into a cluster-wide personnel-map and then prints all the known personnel.
Now, run the following code.
import hazelcast
client = hazelcast.HazelcastClient()
personnel_map = client.get_map("personnel-map")
personnel_map.put("Denise", "Sales")
personnel_map.put("Erwing", "Sales")
personnel_map.put("Faith", "Sales")
print("Added Sales personnel. Printing all known personnel")
for person, department in personnel_map.entry_set().result():
print("%s is in %s department" % (person, department))
client.shutdown()
Output
Added Sales personnel. Printing all known personnel
Denise is in Sales department
Erwing is in Sales department
Faith is in Sales department
Alice is in IT department
Clark is in IT department
Bob is in IT department
Note
For the sake of brevity, we are going to omit boilerplate parts, like imports, in the later code snippets. Refer to the Code Samples section to see samples with the complete code.
You will see that this time we added only the Sales employees but got the list of all known employees, including the ones in IT. This is because our map lives in the cluster and, no matter which client we use, we can access the whole map.
You may wonder why we called the result() method on the return value of the entry_set() method of the personnel_map. This is because the Hazelcast Python client is designed to be fully asynchronous. Every method call over distributed objects such as put(), get(), entry_set(), etc. returns a Future object that is similar to the Future class of the concurrent.futures module.
With this design choice, method calls over the distributed objects can be executed asynchronously without blocking the execution order of your program.
You may get the value returned by a method call using the result() method of the Future class. This blocks the execution of your program and waits until the future finishes running. Then, it returns the value returned by the call, which is the set of key-value pairs in our entry_set() method call.
You may also attach a function to the future objects that will be called, with the future as its only argument, when the future finishes running.
For example, the part where we printed the personnel in the above code can be rewritten with a callback attached to the entry_set(), as shown below.
def entry_set_cb(future):
for person, department in future.result():
print("%s is in %s department" % (person, department))
personnel_map.entry_set().add_done_callback(entry_set_cb)
time.sleep(1) # wait for Future to complete
Asynchronous operations are far more efficient in a single-threaded Python interpreter, but you may want all of your method calls over distributed objects to be blocking. For this purpose, the Hazelcast Python client provides a helper method called blocking(). With it, each method call over a distributed object blocks the execution of your program until the return value of the call is computed, and returns that value directly instead of a Future object.
To make the personnel_map presented previously in this section blocking, you need to call the blocking() method on it.
personnel_map = client.get_map("personnel-map").blocking()
Now, all the methods of the personnel_map, such as put() and entry_set(), will be blocking. So, you don’t need to call result() on them or attach callbacks anymore.
for person, department in personnel_map.entry_set():
print("%s is in %s department" % (person, department))
Code Samples¶
See the Hazelcast Python examples for more code samples.
Features¶
Hazelcast Python client supports the following data structures and features:
Map
Queue
Set
List
MultiMap
Replicated Map
Ringbuffer
ReliableTopic
Topic
CRDT PN Counter
Flake Id Generator
Distributed Executor Service
Event Listeners
Sub-Listener Interfaces for Map Listener
Entry Processor
Transactional Map
Transactional MultiMap
Transactional Queue
Transactional List
Transactional Set
SQL
Query (Predicates)
Entry Processor
Built-in Predicates
Listener with Predicate
Aggregations
Projections
Near Cache Support
Programmatic Configuration
SSL Support (requires Enterprise server)
Mutual Authentication (requires Enterprise server)
Authorization
Management Center Integration / Awareness
Client Near Cache Stats
Client Runtime Stats
Client Operating Systems Stats
Hazelcast Viridian Discovery
Smart Client
Unisocket Client
Lifecycle Service
IdentifiedDataSerializable Serialization
Portable Serialization
Custom Serialization
JSON Serialization
Global Serialization
Connection Strategy
Connection Retry
Configuration Overview¶
The client can be configured either by keyword arguments or by a configuration object.
Keyword Arguments Configuration¶
It is possible to pass keyword arguments directly to the client’s constructor to configure desired aspects of the client.
The keyword argument names must be valid property names of the hazelcast.config.Config class with valid values.
from hazelcast import HazelcastClient
client = HazelcastClient(
cluster_name="a-cluster",
cluster_members=["127.0.0.1:5701"],
)
Using a Configuration Object¶
Alternatively, you can create a configuration object, and pass it to the client as its only argument.
This approach may provide a better user experience, as it provides hints for the configuration option names and their types.
from hazelcast import HazelcastClient
from hazelcast.config import Config
config = Config()
config.cluster_name = "a-cluster"
config.cluster_members = ["127.0.0.1:5701"]
client = HazelcastClient(config)
Serialization¶
Serialization is the process of converting an object into a stream of bytes to store the object in the memory, a file or database, or transmit it through the network. Its main purpose is to save the state of an object in order to be able to recreate it when needed. The reverse process is called deserialization. Hazelcast offers you its own native serialization methods. You will see these methods throughout this chapter.
Hazelcast serializes all your objects before sending them to the server.
The bool, int, float, str, bytearray, list, datetime.date, datetime.time, datetime.datetime, and decimal.Decimal types are serialized natively and you cannot override this behavior. The following table shows the conversion of these types on the Java server side.
Python | Java
---|---
bool | Boolean
int | Byte, Short, Integer, Long, java.math.BigInteger
float | Float, Double
str | String
bytearray | byte[]
list | java.util.ArrayList
datetime.date | java.time.LocalDate
datetime.time | java.time.LocalTime
datetime.datetime | java.time.OffsetDateTime
decimal.Decimal | java.math.BigDecimal
Note
An int is serialized as Integer by default. You can configure this behavior using the default_int_type argument.
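For example, a minimal sketch (using the IntType enum from hazelcast.config) that serializes Python ints as Java Long values:
import hazelcast
from hazelcast.config import IntType

# Serialize Python ints as Java Long instead of the default Integer.
client = hazelcast.HazelcastClient(
    default_int_type=IntType.LONG,
)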
Arrays of the above types can be serialized as boolean[], byte[], short[], int[], float[], double[], long[], and string[] on the Java server side, respectively.
Serialization Priority
When the Hazelcast Python client serializes an object:
It first checks whether the object is None.
If the above check fails, then it checks if there is a CompactSerializer registered for the class of the object.
If the above check fails, then it checks if it is an instance of IdentifiedDataSerializable.
If the above check fails, then it checks if it is an instance of Portable.
If the above check fails, then it checks if it is an instance of one of the default types (see the default types above).
If the above check fails, then it looks for a user-specified Custom Serialization.
If the above check fails, it will use the registered Global Serialization if one exists.
If the above check fails, then the Python client uses pickle by default.
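As an illustrative sketch of the last fallback (the class and map name here are hypothetical), a plain Python object with no registered serializer is simply pickled:
class Color:
    def __init__(self, name):
        self.name = name

# No serializer matches Color, so the client falls back to pickle.
colors = client.get_map("colors").blocking()
colors.put("c1", Color("red"))
print(colors.get("c1").name)  # red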
However, cPickle/pickle serialization is not the best way of serialization in terms of performance and interoperability between clients in different languages. If you want the serialization to work faster or you use clients in different languages, Hazelcast offers its own native serialization types, such as Compact Serialization, IdentifiedDataSerializable Serialization, and Portable Serialization.
On top of all, if you want to use your own serialization type, you can use a Custom Serialization.
Compact Serialization¶
As an enhancement to existing serialization methods, Hazelcast offers Compact serialization with the following main features:
Separates the schema from the data and stores it per type, not per object which results in less memory and bandwidth usage compared to other formats
Does not require a class to extend another class or change the source code of the class in any way
Supports schema evolution which permits adding or removing fields, or changing the types of fields
Platform and language independent
Supports partial deserialization of fields during queries or indexing
Hazelcast achieves these features by having well-known schemas of objects and replicating them across the cluster, which enables members and clients to fetch schemas they don’t have in their local registries. Each serialized object carries just a schema identifier and relies on the schema distribution service or configuration to match identifiers with the actual schema. Once the schemas are fetched, they are cached locally on the members and clients so that subsequent operations that use the schema do not incur extra costs.
Schemas help Hazelcast to identify the locations of the fields on the serialized binary data. With this information, Hazelcast can deserialize individual fields of the data, without reading the whole binary. This results in a better query and indexing performance.
Schemas can evolve freely by adding or removing fields. Even the types of the fields can be changed. Multiple versions of the schema may live in the same cluster, and both old and new readers may read the compatible parts of the data. This feature is especially useful in rolling upgrade scenarios.
The Compact serialization does not require any changes in the user classes as it does not need a class to extend another class. Serializers might be implemented and registered separately from the classes.
The underlying format of the Compact serialized objects is platform and language independent.
Using Compact Serialization¶
Compact serialization can be used by writing a serializer that extends CompactSerializer for a class and registering it in the client configuration.
For example, assume that you have the following Employee class:
class Employee:
def __init__(self, name: str, age: int):
self.name = name
self.age = age
Then, a serializer for it can be implemented as below:
import typing

from hazelcast.serialization.api import CompactSerializer, CompactWriter, CompactReader
class EmployeeSerializer(CompactSerializer[Employee]):
def read(self, reader: CompactReader) -> Employee:
name = reader.read_string("name")
age = reader.read_int32("age")
return Employee(name, age)
def write(self, writer: CompactWriter, obj: Employee) -> None:
writer.write_string("name", obj.name)
writer.write_int32("age", obj.age)
def get_type_name(self) -> str:
return "employee"
def get_class(self) -> typing.Type[Employee]:
return Employee
The last step is to register the serializer in the client configuration.
client = HazelcastClient(
compact_serializers=[
EmployeeSerializer(),
]
)
A schema will be created from the serializer, and a unique schema identifier will be assigned to it automatically.
From now on, Hazelcast will serialize instances of the Employee class using the EmployeeSerializer.
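A small usage sketch under the configuration above (the map name and values are hypothetical):
# Employee instances are now serialized with EmployeeSerializer.
employees = client.get_map("employees").blocking()
employees.put(1, Employee("Alice", 30))
print(employees.get(1).name)  # Alice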
Schema Evolution¶
Compact serialization permits schemas and classes to evolve by adding or removing fields, or by changing the types of fields. More than one version of a class may live in the same cluster and different clients or members might use different versions of the class.
Hazelcast handles the versioning internally. So, you don’t have to change anything in the classes or serializers apart from the added, removed, or changed fields.
Hazelcast achieves this by identifying each version of the class by a unique fingerprint. Any change in a class results in a different fingerprint. Hazelcast uses a 64-bit Rabin Fingerprint to assign identifiers to schemas, which has an extremely low collision rate.
Different versions of the schema with different identifiers are replicated in the cluster and can be fetched by clients or members internally. That allows old readers to read fields of the classes they know when they try to read data serialized by a new writer. Similarly, new readers might read fields of the classes available in the data, when they try to read data serialized by an old writer.
Assume that the two versions of the following Employee class live in the cluster.
class Employee:
def __init__(self, name: str, age: int):
self.name = name
self.age = age
class Employee:
def __init__(self, name: str, age: int, is_active: bool):
self.name = name
self.age = age
self.is_active = is_active # Newly added field
Then, when faced with binary data serialized by the new writer, old readers will be able to read the following fields.
class EmployeeSerializer(CompactSerializer[Employee]):
def read(self, reader: CompactReader) -> Employee:
name = reader.read_string("name")
age = reader.read_int32("age")
# The new "is_active" field is there, but the old reader does not
# know anything about it. Hence, it will simply ignore that field.
return Employee(name, age)
...
Then, when faced with binary data serialized by the old writer, new readers will be able to read the following fields. Also, Hazelcast provides convenient APIs to read default values when there is no such field in the data.
class EmployeeSerializer(CompactSerializer[Employee]):
def read(self, reader: CompactReader) -> Employee:
name = reader.read_string("name")
age = reader.read_int32("age")
# Read the "is_active" if it exists, or use the default value `False`.
# reader.read_boolean("is_active") would throw if the "is_active"
# field does not exist in data.
if reader.get_field_kind("is_active") == FieldKind.BOOLEAN:
is_active = reader.read_boolean("is_active")
else:
is_active = False
return Employee(name, age, is_active)
...
Note that when an old reader reads data written by an old writer, or a new reader reads data written by a new writer, they will be able to read all fields.
One thing to be careful about while evolving the class is to not have any conditional code in the write method. That method must write all the fields available in the current version of the class to the writer, with appropriate field names and types. Hazelcast uses the write method of the serializer to extract a schema out of the object; hence, any conditional code in that method that may or may not run depending on the object might result in undefined behavior.
Additionally, evolved serializers must have the same type name as the initial version of the serializer.
IdentifiedDataSerializable Serialization¶
For faster serialization of objects, Hazelcast recommends extending the IdentifiedDataSerializable class.
The following is an example of a class that extends IdentifiedDataSerializable:
from hazelcast.serialization.api import IdentifiedDataSerializable
class Address(IdentifiedDataSerializable):
def __init__(self, street=None, zip_code=None, city=None, state=None):
self.street = street
self.zip_code = zip_code
self.city = city
self.state = state
def get_class_id(self):
return 1
def get_factory_id(self):
return 1
def write_data(self, output):
output.write_string(self.street)
output.write_int(self.zip_code)
output.write_string(self.city)
output.write_string(self.state)
def read_data(self, input):
self.street = input.read_string()
self.zip_code = input.read_int()
self.city = input.read_string()
self.state = input.read_string()
Note
Refer to the ObjectDataInput/ObjectDataOutput classes in the hazelcast.serialization.api package to understand the methods available on the input/output objects.
IdentifiedDataSerializable uses the get_class_id() and get_factory_id() methods to reconstitute the object. To complete the implementation, an IdentifiedDataSerializable factory should also be created and registered into the client using the data_serializable_factories argument. A factory is a dictionary that stores class IDs and the corresponding IdentifiedDataSerializable class types as key-value pairs. The factory’s responsibility is to provide the right IdentifiedDataSerializable class type for a given class ID.
A sample IdentifiedDataSerializable factory could be created as follows:
factory = {
1: Address
}
Note that the keys of the dictionary should be the same as the class IDs of their corresponding IdentifiedDataSerializable class types.
Note
For IdentifiedDataSerializable to work in the Python client, the class that inherits it should have default-valued parameters in its __init__ method so that an instance of that class can be created without passing any arguments to it.
The last step is to register the IdentifiedDataSerializable factory to the client.
client = hazelcast.HazelcastClient(
data_serializable_factories={
1: factory
}
)
Note that the ID that is passed as the key of the factory is the same as the factory ID that the Address class returns.
Portable Serialization¶
As an alternative to the existing serialization methods, Hazelcast offers portable serialization. To use it, you need to extend the Portable class. Portable serialization has the following advantages:
Supporting multiple versions of the same object type.
Fetching individual fields without having to rely on reflection.
Querying and indexing support without deserialization and/or reflection.
In order to support these features, a serialized Portable object contains meta information like the version and the concrete location of each field in the binary data. This way, Hazelcast is able to navigate the binary data and deserialize only the required field without deserializing the whole object, which improves query performance.
With multiversion support, you can have two members each having different versions of the same object; Hazelcast stores both meta information and uses the correct one to serialize and deserialize portable objects depending on the member. This is very helpful when you are doing a rolling upgrade without shutting down the cluster.
Also note that portable serialization is completely language independent and is used as the binary protocol between Hazelcast server and clients.
A sample portable implementation of a Foo class looks like the following:
from hazelcast.serialization.api import Portable
class Foo(Portable):
def __init__(self, foo=None):
self.foo = foo
def get_class_id(self):
return 1
def get_factory_id(self):
return 1
def write_portable(self, writer):
writer.write_string("foo", self.foo)
def read_portable(self, reader):
self.foo = reader.read_string("foo")
Note
Refer to the PortableReader/PortableWriter classes in the hazelcast.serialization.api package to understand the methods available on the reader/writer objects.
Note
For Portable to work in the Python client, the class that inherits it should have default-valued parameters in its __init__ method so that an instance of that class can be created without passing any arguments to it.
Similar to IdentifiedDataSerializable, a Portable class must provide the get_class_id() and get_factory_id() methods. The factory dictionary will be used to create the Portable object given the class ID.
A sample Portable factory could be created as follows:
factory = {
1: Foo
}
Note that the keys of the dictionary should be the same as the class IDs of their corresponding Portable class types.
The last step is to register the Portable factory to the client.
client = hazelcast.HazelcastClient(
portable_factories={
1: factory
}
)
Note that the ID that is passed as the key of the factory is the same as the factory ID that the Foo class returns.
Versioning for Portable Serialization¶
More than one version of the same class may need to be serialized and deserialized. For example, a client may have an older version of a class and the member to which it is connected may have a newer version of the same class.
Portable serialization supports versioning. It is a global versioning, meaning that all portable classes that are serialized through a member get the globally configured portable version.
You can declare the version using the portable_version argument, as shown below.
client = hazelcast.HazelcastClient(
portable_version=1
)
If you update the class by changing the type of one of the fields or by adding a new field, it is a good idea to upgrade the version of the class, rather than sticking to the global version specified in the configuration. In the Python client, you can achieve this by simply adding the get_class_version() method to your class’s implementation of Portable, and returning a class version different from the default global version.
Note
If you do not implement the get_class_version() method in your Portable implementation, it will have the global version by default.
Here is an example implementation of creating a version 2 for the above Foo class:
from hazelcast.serialization.api import Portable
class Foo(Portable):
def __init__(self, foo=None, foo2=None):
self.foo = foo
self.foo2 = foo2
def get_class_id(self):
return 1
def get_factory_id(self):
return 1
def get_class_version(self):
return 2
def write_portable(self, writer):
writer.write_string("foo", self.foo)
writer.write_string("foo2", self.foo2)
def read_portable(self, reader):
self.foo = reader.read_string("foo")
self.foo2 = reader.read_string("foo2")
You should consider the following when you perform versioning:
It is important to change the version whenever an update is performed in the serialized fields of a class, for example by incrementing the version.
If a client performs a Portable deserialization on a field and then that Portable is updated by removing that field on the cluster side, this may lead to problems such as an AttributeError being raised when an older version of the client tries to access the removed field.
Portable serialization does not use reflection and hence, fields in the class and in the serialized content are not automatically mapped. Field renaming is a simpler process. Also, since the class ID is stored, renaming the Portable does not lead to problems.
Types of fields need to be updated carefully. Hazelcast performs basic type upgrades, such as int to float.
Example Portable Versioning Scenarios¶
Assume that a new client joins the cluster with a class that has been modified and the class’s version has been upgraded due to this modification.
If you modified the class by adding a new field, the new client’s put operations include that new field. If this new client tries to get an object that was put from the older clients, it gets null for the newly added field.
If you modified the class by removing a field, the old clients get null for the objects that are put by the new client.
If you modified the class by changing the type of a field to an incompatible type (such as from int to str), a TypeError (wrapped as HazelcastSerializationError) is generated when the client tries to access an object with the older version of the class. The same applies if a client with the old version tries to access a new-version object.
If you did not modify a class at all, it works as usual.
Custom Serialization¶
Hazelcast lets you plug in a custom serializer to be used for serialization of objects.
Let’s say you have a class called Musician and you would like to customize its serialization, since you may want to use an external serializer for only one class.
class Musician:
def __init__(self, name):
self.name = name
Let’s say your custom MusicianSerializer will serialize Musician. This time, your custom serializer must extend the StreamSerializer class.
from hazelcast.serialization.api import StreamSerializer
class MusicianSerializer(StreamSerializer):
def get_type_id(self):
return 10
def destroy(self):
pass
def write(self, output, obj):
output.write_string(obj.name)
def read(self, input):
name = input.read_string()
return Musician(name)
Note that the serializer type ID must be unique, as Hazelcast will use it to look up the MusicianSerializer while it deserializes the object.
Now the last required step is to register the MusicianSerializer to the client.
client = hazelcast.HazelcastClient(
custom_serializers={
Musician: MusicianSerializer
}
)
From now on, Hazelcast will use MusicianSerializer to serialize Musician objects.
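A small usage sketch under the registration above (the map name and value are hypothetical):
# Musician instances are now serialized with MusicianSerializer.
musicians = client.get_map("musicians").blocking()
musicians.put("m1", Musician("John Coltrane"))
print(musicians.get("m1").name)  # John Coltrane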
JSON Serialization¶
You can use JSON formatted strings as objects in the Hazelcast cluster. Creating JSON objects in the cluster does not require any server-side coding; you can just send a JSON formatted string object to the cluster and query these objects by fields.
In order to use JSON serialization, you should use the HazelcastJsonValue object for the key or value. HazelcastJsonValue is a simple wrapper and identifier for JSON formatted strings. You can get the JSON string from the HazelcastJsonValue object using the to_string() method.
You can construct a HazelcastJsonValue from strings or JSON serializable Python objects. If a Python object is provided to the constructor, HazelcastJsonValue tries to convert it to a JSON string. If an error occurs during the conversion, it is raised directly. If a string argument is provided to the constructor, it is used as-is.
In the constructor, no JSON parsing is performed. It is your responsibility to provide correctly formatted JSON strings. The client will not validate the string; it will send it to the cluster as-is. If you submit incorrectly formatted JSON strings and, later, you query those objects, it is highly possible that you will get formatting errors, since the server will fail to deserialize or find the query fields.
Here is an example of how you can construct a HazelcastJsonValue and put it to the map:
# From JSON string
json_map.put("item1", HazelcastJsonValue("{\"age\": 4}"))
# From JSON serializable object
json_map.put("item2", HazelcastJsonValue({"age": 20}))
You can query JSON objects in the cluster using the Predicate of your choice. An example query retrieving the values whose age is less than 6 is shown below:
from hazelcast.predicate import less

# Get the objects whose age is less than 6
result = json_map.values(less("age", 6))
print("Retrieved %s values whose age is less than 6." % len(result))
print("Entry is", result[0].to_string())
Global Serialization¶
The global serializer is identical to custom serializers from the implementation perspective. The global serializer is registered as a fallback serializer to handle all other objects if a serializer cannot be located for them.
By default, cPickle/pickle serialization is used if the class is not IdentifiedDataSerializable or Portable and there is no custom serializer for it. When you configure a global serializer, it is used instead of cPickle/pickle serialization.
Use Cases:
Third party serialization frameworks can be integrated using the global serializer.
For your custom objects, you can implement a single serializer to handle all of them.
A sample global serializer that integrates with a third party serializer is shown below.
import some_third_party_serializer
from hazelcast.serialization.api import StreamSerializer
class GlobalSerializer(StreamSerializer):
def get_type_id(self):
return 20
def destroy(self):
pass
def write(self, output, obj):
output.write_string(some_third_party_serializer.serialize(obj))
def read(self, input):
return some_third_party_serializer.deserialize(input.read_string())
You should register the global serializer to the client.
client = hazelcast.HazelcastClient(
global_serializer=GlobalSerializer
)
Setting Up Client Network¶
The main network-related configuration options for the Hazelcast Python client can be tuned via the arguments described in this section.
Here is an example of configuring the network for Python client.
client = hazelcast.HazelcastClient(
cluster_members=[
"10.1.1.21",
"10.1.1.22:5703"
],
smart_routing=True,
redo_operation=False,
connection_timeout=6.0
)
Providing Member Addresses¶
Address list is the initial list of cluster addresses which the client will connect to. The client uses this list to find an alive member. Although it may be enough to give only one address of a member in the cluster (since all members communicate with each other), it is recommended that you give the addresses for all the members.
client = hazelcast.HazelcastClient(
cluster_members=[
"10.1.1.21",
"10.1.1.22:5703"
]
)
If the port part is omitted, then ports 5701, 5702, and 5703 will be tried in a random order.
You can specify multiple addresses, with or without the port information, as seen above. The provided list is shuffled and tried in a random order. Its default value is localhost.
Setting Smart Routing¶
Smart routing defines whether the client mode is smart or unisocket. See the Python Client Operation Modes section for the description of smart and unisocket modes.
client = hazelcast.HazelcastClient(
smart_routing=True,
)
Its default value is True (smart client mode).
Enabling Redo Operation¶
It enables/disables redo-able operations. While sending the requests to the related members, the operations can fail due to various reasons. Read-only operations are retried by default. If you want to enable retry for the other operations, you can set redo_operation to True.
client = hazelcast.HazelcastClient(
redo_operation=False
)
Its default value is False (disabled).
Setting Connection Timeout¶
Connection timeout is the timeout value in seconds for the members to accept the client connection requests.
client = hazelcast.HazelcastClient(
connection_timeout=6.0
)
Its default value is 5.0 seconds.
Enabling Client TLS/SSL¶
You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL for the client-cluster connection, you should set the SSL configuration. Please see the TLS/SSL section.
As explained in the TLS/SSL section, Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Python clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Python clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the Mutual Authentication section.
Enabling Hazelcast Viridian Discovery¶
The Hazelcast Python client can discover and connect to Hazelcast clusters running on Hazelcast Viridian. For this, provide the authentication information as cluster_name and enable Viridian discovery by setting your cloud_discovery_token, as shown below.
client = hazelcast.HazelcastClient(
cluster_name="name-of-your-cluster",
cloud_discovery_token="discovery-token"
)
If you have enabled encryption for your cluster, you should also enable TLS/SSL configuration for the client to secure communication between your client and cluster members as described in the TLS/SSL for Hazelcast Python Clients section.
External Smart Client Discovery¶
Warning
This feature requires Hazelcast 4.2 or higher version.
The client sends requests directly to cluster members in the smart client mode (default) in order to reduce hops to accomplish operations. Because of that, the client should know the addresses of members in the cluster.
In cloud-like environments, or Kubernetes, there are usually two network interfaces: the private and public network interfaces. When the client is in the same network as the members, it uses their private network addresses. Otherwise, if the client and the Hazelcast cluster are on different networks, the client cannot connect to members using their private network addresses. Hazelcast 4.2 introduced External Smart Client Discovery to solve that issue. The client needs to communicate with all cluster members via their public IP addresses in this case. Whenever Hazelcast cluster members are able to resolve their own public external IP addresses, they pass this information to the client. As a result, the client can use public addresses for communication.
In order to use this feature, make sure your cluster members are accessible from the network the client resides in, then set the use_public_ip configuration option to True while constructing the client. You should also specify the public address of at least one member in the configuration:
client = hazelcast.HazelcastClient(
cluster_members=["myserver.publicaddress.com:5701"],
use_public_ip=True,
)
This solution works everywhere without further configuration (Kubernetes, AWS, GCP, Azure, etc.) as long as the corresponding plugin is enabled in Hazelcast server configuration.
Configuring Backup Acknowledgment¶
When an operation with a sync backup is sent by a client to the Hazelcast member(s), the acknowledgment of the operation’s backup is sent to the client directly by the backup replica member(s), instead of being routed through the member that owns the primary replica. This improves the performance of the client operations.
To disable backup acknowledgment, you should use the backup_ack_to_client_enabled configuration option.
client = hazelcast.HazelcastClient(
backup_ack_to_client_enabled=False,
)
Its default value is True. This option has no effect for unisocket clients.
You can also fine-tune this feature using the config options described below; a configuration sketch follows the list:
operation_backup_timeout: Default value is 5 seconds. If an operation has backups, this property specifies how long the invocation waits for acks from the backup replicas. If acks are not received from some of the backups, there will not be any rollback on the other successful replicas.
fail_on_indeterminate_operation_state: Default value is False. When it is True, if an operation has sync backups and acks are not received from backup replicas in time, or the member which owns the primary replica of the target partition leaves the cluster, then the invocation fails. However, even if the invocation fails, there will not be any rollback on other successful replicas.
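A minimal configuration sketch setting these options to their default values:
client = hazelcast.HazelcastClient(
    backup_ack_to_client_enabled=True,
    operation_backup_timeout=5.0,
    fail_on_indeterminate_operation_state=False,
)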
Client Connection Strategy¶
The Hazelcast Python client can be configured to connect to a cluster asynchronously during client start and to reconnect after a cluster disconnect. Both of these options are configured via the arguments below.
You can configure the client’s starting mode as async or sync using the configuration element async_start. When it is set to True (async), the behavior of the hazelcast.HazelcastClient() call changes: it returns a client instance without waiting to establish a cluster connection. In this case, the client rejects any network-dependent operation with ClientOfflineError immediately until it connects to the cluster. If it is False, the call does not return and the client is not created until a connection with the cluster is established. Its default value is False (sync).
You can also configure how the client reconnects to the cluster after a disconnection. This is configured using the configuration element reconnect_mode; it has three options:
OFF: Client rejects to reconnect to the cluster and triggers the shutdown process.
ON: Client opens a connection to the cluster in a blocking manner, without resolving any of the waiting invocations.
ASYNC: Client opens a connection to the cluster in a non-blocking manner, resolving all the waiting invocations with ClientOfflineError.
Its default value is ON.
The example configuration below shows how to configure a Python client’s starting and reconnecting modes.
from hazelcast.config import ReconnectMode
client = hazelcast.HazelcastClient(
async_start=False,
# You can also set this to "ON"
# without importing anything.
reconnect_mode=ReconnectMode.ON
)
Configuring Client Connection Retry¶
The client searches for new connections when it is trying to connect to the cluster. Both the frequency of connection attempts and the client shutdown behavior can be configured using the arguments below.
client = hazelcast.HazelcastClient(
retry_initial_backoff=1,
retry_max_backoff=15,
retry_multiplier=1.5,
retry_jitter=0.2,
cluster_connect_timeout=120
)
The following are configuration element descriptions:
retry_initial_backoff: Specifies how long to wait (backoff), in seconds, after the first failure before retrying. Its default value is 1. It must be non-negative.
retry_max_backoff: Specifies the upper limit for the backoff in seconds. Its default value is 30. It must be non-negative.
retry_multiplier: Factor to multiply the backoff after a failed retry. Its default value is 1.05. It must be greater than or equal to 1.
retry_jitter: Specifies by how much to randomize backoffs. Its default value is 0. It must be in the range 0 to 1.
cluster_connect_timeout: Timeout value in seconds for the client to give up connecting to the cluster. Its default value is -1, meaning the client will try to connect continuously.
A pseudo-code is as follows:
begin_time = get_current_time()
current_backoff = INITIAL_BACKOFF
while (try_connect(connection_timeout) != SUCCESS) {
if (get_current_time() - begin_time >= CLUSTER_CONNECT_TIMEOUT) {
// Give up connecting to the current cluster and switch to another one if it exists.
// CLUSTER_CONNECT_TIMEOUT is infinite by default.
}
sleep(current_backoff + uniform_random(-JITTER * current_backoff, JITTER * current_backoff))
current_backoff = min(current_backoff * MULTIPLIER, MAX_BACKOFF)
}
Note that try_connect above tries to connect to any member that the client knows, and each connection attempt has its own connection timeout; see the Setting Connection Timeout section.
Using Python Client with Hazelcast¶
This chapter provides information on how you can use Hazelcast data structures in the Python client, after giving some basic information, including an overview of the client API, the operation modes of the client, and how it handles failures.
Python Client API Overview¶
Hazelcast Python client is designed to be fully asynchronous. See the Basic Usage section to learn more about the asynchronous nature of the Python Client.
If you are ready to go, let’s start to use Hazelcast Python client.
The first step is configuration. See the Configuration Overview section for details.
The following is an example of how to configure and initialize the HazelcastClient to connect to the cluster:
client = hazelcast.HazelcastClient(
cluster_name="dev",
cluster_members=[
"198.51.100.2"
]
)
This client object is your gateway to access all the Hazelcast distributed objects.
Let’s create a map and populate it with some data, as shown below.
# Get a Map called 'my-distributed-map'
customer_map = client.get_map("customers").blocking()
# Write and read some data
customer_map.put("1", "John Stiles")
customer_map.put("2", "Richard Miles")
customer_map.put("3", "Judy Doe")
As the final step, if you are done with your client, you can shut it down as shown below. This will release all the used resources and close connections to the cluster.
client.shutdown()
Python Client Operation Modes¶
The client has two operation modes because of the distributed nature of the data and cluster: smart and unisocket. Refer to the Setting Smart Routing section to see how to configure the client for different operation modes.
Smart Client¶
In the smart mode, the clients connect to all the cluster members. Since each data partition uses the well known and consistent hashing algorithm, each client can send an operation to the relevant cluster member, which increases the overall throughput and efficiency. Smart mode is the default mode.
Unisocket Client¶
In some cases, the clients can be required to connect to a single member instead of each member in the cluster. Firewalls, security, or custom networking requirements can be the reason for these cases.
In the unisocket client mode, the client will only connect to one of the configured member addresses. This single member will behave as a gateway to the other members. For any operation requested by the client, this member will redirect the request to the relevant member and return the response received from that member back to the client.
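A minimal sketch that puts the client into unisocket mode by disabling smart routing:
client = hazelcast.HazelcastClient(
    smart_routing=False,  # unisocket mode
)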
Handling Failures¶
There are two main failure cases you should be aware of. The sections below explain these and the configuration you can perform to achieve proper behavior.
Handling Client Connection Failure¶
While the client is initially trying to connect to one of the members given in cluster_members, all of the members might be unavailable. Instead of giving up, throwing an error, and stopping the client, the client retries to connect as configured. This behavior is described in the Configuring Client Connection Retry section.
The client executes each operation through an already established connection to the cluster. If this connection disconnects or drops, the client will try to reconnect as configured.
Handling Retry-able Operation Failure¶
While sending the requests to the related members, the operations can fail due to various reasons. Read-only operations are retried by default. If you want to enable retrying for the other operations, you can set redo_operation to True. See the Enabling Redo Operation section.
You can set a timeout for retrying the operations sent to a member. This can be tuned by passing the invocation_timeout argument to the client. The client will retry an operation within this given period, provided, of course, that it is a read-only operation or you have enabled redo_operation as stated above. This timeout value is important when a failure results from any of the following causes:
The member throws an exception.
The connection between the client and the member is closed.
The client’s heartbeat requests time out.
When a connection problem occurs, an operation is retried if it is certain that it has not run on the member yet, or if it is idempotent, such as a read-only operation, i.e., retrying does not have a side effect. If it is not certain whether the operation has run on the member, then non-idempotent operations are not retried. However, as explained in the first paragraph of this section, you can force all the client operations to be retried (redo_operation) when there is a connection failure between the client and a member. But in this case, you should know that some operations may run multiple times, causing conflicts. For example, assume that your client sent a queue.offer operation to the member and then the connection was lost. Since there will be no response for this operation, you will not know whether it has run on the member or not. If you enabled redo_operation, this operation may run again, which may cause two instances of the same object in the queue.
When an invocation is being retried, the client may wait some time before it retries again. This duration can be configured using the invocation_retry_pause argument. The default retry pause time is 1 second. A combined sketch is shown below.
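Here is a minimal sketch setting both of these arguments to their default values:
client = hazelcast.HazelcastClient(
    invocation_timeout=120,      # seconds to keep retrying an invocation
    invocation_retry_pause=1.0,  # seconds to wait between retries
)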
Using Distributed Data Structures¶
Most of the distributed data structures are supported by the Python client. In this chapter, you will learn how to use these distributed data structures.
Using Map¶
Hazelcast Map is a distributed dictionary. Through the Python client, you can perform operations like reading and writing from/to a Hazelcast Map with the well known get and put methods. For details, see the Map section in the Hazelcast Reference Manual.
A Map usage example is shown below.
# Get a Map called 'my-distributed-map'
my_map = client.get_map("my-distributed-map").blocking()
# Run Put and Get operations
my_map.put("key", "value")
my_map.get("key")
# Run concurrent Map operations (optimistic updates)
my_map.put_if_absent("somekey", "somevalue")
my_map.replace_if_same("key", "value", "newvalue")
Using MultiMap¶
Hazelcast MultiMap is a distributed and specialized map where you can store multiple values under a single key. For details, see the MultiMap section in the Hazelcast Reference Manual.
A MultiMap usage example is shown below.
# Get a MultiMap called 'my-distributed-multimap'
multi_map = client.get_multi_map("my-distributed-multimap").blocking()
# Put values in the map against the same key
multi_map.put("my-key", "value1")
multi_map.put("my-key", "value2")
multi_map.put("my-key", "value3")
# Read and print out all the values associated with the key 'my-key'
# Outputs '['value2', 'value1', 'value3']'
values = multi_map.get("my-key")
print(values)
# Remove specific key/value pair
multi_map.remove("my-key", "value2")
Using Replicated Map¶
Hazelcast Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high speed access. For details, see the Replicated Map section in the Hazelcast Reference Manual.
A Replicated Map usage example is shown below.
# Get a ReplicatedMap called 'my-replicated-map'
replicated_map = client.get_replicated_map("my-replicated-map").blocking()
# Put and get a value from the Replicated Map
# (key/value is replicated to all members)
replaced_value = replicated_map.put("key", "value")
# Will be None as it is the first update
print("replaced value = {}".format(replaced_value)) # Outputs 'replaced value = None'
# The value is retrieved from a random member in the cluster
value = replicated_map.get("key")
print("value for key = {}".format(value)) # Outputs 'value for key = value'
Using Queue¶
Hazelcast Queue is a distributed queue which enables all cluster members to interact with it. For details, see the Queue section in the Hazelcast Reference Manual.
A Queue usage example is shown below.
# Get a Queue called 'my-distributed-queue'
queue = client.get_queue("my-distributed-queue").blocking()
# Offer a string into the Queue
queue.offer("item")
# Poll the Queue and return the string
item = queue.poll()
# Timed-restricted operations
queue.offer("another-item", 0.5) # waits up to 0.5 seconds
another_item = queue.poll(5) # waits up to 5 seconds
# Indefinitely blocking operations
queue.put("yet-another-item")
print(queue.take()) # Outputs 'yet-another-item'
Using Set¶
Hazelcast Set is a distributed set which does not allow duplicate elements. For details, see the Set section in the Hazelcast Reference Manual.
A Set usage example is shown below.
# Get a Set called 'my-distributed-set'
my_set = client.get_set("my-distributed-set").blocking()
# Add items to the Set with duplicates
my_set.add("item1")
my_set.add("item1")
my_set.add("item2")
my_set.add("item2")
my_set.add("item2")
my_set.add("item3")
# Get the items. Note that there are no duplicates.
for item in my_set.get_all():
print(item)
Using List¶
Hazelcast List is a distributed list which allows duplicate elements and preserves the order of elements. For details, see the List section in the Hazelcast Reference Manual.
A List usage example is shown below.
# Get a List called 'my-distributed-list'
my_list = client.get_list("my-distributed-list").blocking()
# Add element to the list
my_list.add("item1")
my_list.add("item2")
# Remove the first element
print("Removed:", my_list.remove_at(0)) # Outputs 'Removed: item1'
# There is only one element left
print("Current size is", my_list.size()) # Outputs 'Current size is 1'
# Clear the list
my_list.clear()
Using Ringbuffer¶
Hazelcast Ringbuffer is a replicated but not partitioned data structure that stores its data in a ring-like structure. You can think of it as a circular array with a given capacity. Each Ringbuffer has a tail and a head. The tail is where the items are added and the head is where the items are overwritten or expired. You can reach each element in a Ringbuffer using a sequence ID, which is mapped to the elements between the head and tail (inclusive) of the Ringbuffer. For details, see the Ringbuffer section in the Hazelcast Reference Manual.
A Ringbuffer usage example is shown below.
# Get a RingBuffer called "my-ringbuffer"
ringbuffer = client.get_ringbuffer("my-ringbuffer").blocking()
# Add two items into ring buffer
ringbuffer.add(100)
ringbuffer.add(200)
# We start from the oldest item.
# If you want to start from the next item, call ringbuffer.tail_sequence()+1
sequence = ringbuffer.head_sequence()
print(ringbuffer.read_one(sequence)) # Outputs '100'
sequence += 1
print(ringbuffer.read_one(sequence)) # Outputs '200'
Using ReliableTopic¶
Hazelcast ReliableTopic is a distributed topic implementation backed up by the Ringbuffer data structure. For details, see the Reliable Topic section in the Hazelcast Reference Manual.
A Reliable Topic usage example is shown below.
# Get a Topic called "my-distributed-topic"
topic = client.get_reliable_topic("my-distributed-topic").blocking()
# Add a Listener to the Topic
topic.add_listener(lambda message: print(message))
# Publish a message to the Topic
topic.publish("Hello to distributed world")
Configuring Reliable Topic¶
You may configure Reliable Topics using the reliable_topics argument:
from hazelcast.config import TopicOverloadPolicy

client = hazelcast.HazelcastClient(
reliable_topics={
"my-topic": {
"overload_policy": TopicOverloadPolicy.DISCARD_OLDEST,
"read_batch_size": 20,
}
}
)
The following are the descriptions of the configuration elements and attributes:
keys of the dictionary: Name of the Reliable Topic.
overload_policy: Policy to handle an overloaded topic. Set to BLOCK by default.
read_batch_size: Number of messages the reliable topic will try to read in a batch. It will get at least one, but if more messages are available, it will try to read more to increase throughput. Set to 10 by default.
Using Topic¶
Hazelcast Topic is a distribution mechanism for publishing messages that are delivered to multiple subscribers. For details, see the Topic section in the Hazelcast Reference Manual.
A Topic usage example is shown below.
# Function to be called when a message is published
def print_on_message(topic_message):
print("Got message:", topic_message.message)
# Get a Topic called "my-distributed-topic"
topic = client.get_topic("my-distributed-topic").blocking()
# Add a Listener to the Topic
topic.add_listener(print_on_message)
# Publish a message to the Topic
topic.publish("Hello to distributed world") # Outputs 'Got message: Hello to distributed world'
Using Transactions¶
The Hazelcast Python client provides transactional operations like beginning transactions, committing transactions, and retrieving transactional data structures such as TransactionalMap, TransactionalSet, TransactionalList, TransactionalQueue, and TransactionalMultiMap.
You can create a Transaction object using the Python client to begin, commit, and rollback a transaction. You can obtain transaction-aware instances of queues, maps, sets, lists, and multimaps via the Transaction object, work with them, and commit or rollback in one shot. For details, see the Transactions section in the Hazelcast Reference Manual.
# Create a Transaction object and begin the transaction
transaction = client.new_transaction(timeout=10)
transaction.begin()
# Get transactional distributed data structures
txn_map = transaction.get_map("transactional-map")
txn_queue = transaction.get_queue("transactional-queue")
txn_set = transaction.get_set("transactional-set")
try:
obj = txn_queue.poll()
# Process obj
txn_map.put("1", "value1")
txn_set.add("value")
# Do other things
# Commit the above changes done in the cluster.
transaction.commit()
except Exception as ex:
# In the case of a transactional failure, rollback the transaction
transaction.rollback()
print("Transaction failed! {}".format(ex.args))
In a transaction, operations are not executed immediately. Their changes are local to the Transaction object until committed. However, the affected entries are locked to guard the pending changes. For the example above, when txn_map.put() is executed, no data is put into the map yet, but the key is locked against changes. During the commit, the operations are executed, the value is put into the map and the key is unlocked.
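The following sketch illustrates this behavior; it assumes the client from the earlier examples is connected to a running cluster.
transaction = client.new_transaction(timeout=10)
transaction.begin()

txn_map = transaction.get_map("transactional-map")
txn_map.put("isolated-key", "value")

# The pending change is local to the transaction, so a plain map
# proxy does not see it yet.
plain_map = client.get_map("transactional-map").blocking()
print(plain_map.get("isolated-key"))  # None

transaction.commit()

# After the commit, the change is visible to everyone.
print(plain_map.get("isolated-key"))  # value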
The isolation level in Hazelcast Transactions is READ_COMMITTED on the level of a single partition. If you are in a transaction, you can read the data in your transaction and the data that is already committed. If you are not in a transaction, you can only read the committed data.
One can also use context managers to simplify the usage of the transactional data structures. The try/except example above can be simplified as below.
# Create a Transaction object and begin the transaction
with client.new_transaction(timeout=10) as transaction:
# Get transactional distributed data structures
txn_map = transaction.get_map("transactional-map")
txn_queue = transaction.get_queue("transactional-queue")
txn_set = transaction.get_set("transactional-set")
obj = txn_queue.poll()
# Process obj
txn_map.put("1", "value1")
txn_set.add("value")
# Do other things
# If everything goes well, the transaction will be
# committed, if not, it will be rolled back automatically.
Using PN Counter¶
Hazelcast PNCounter
(Positive-Negative Counter) is a CRDT
positive-negative counter implementation. It is an eventually consistent
counter given there is no member failure. For details, see the
PN Counter section
in the Hazelcast Reference Manual.
A PN Counter usage example is shown below.
# Get a PN Counter called 'pn-counter'
pn_counter = client.get_pn_counter("pn-counter").blocking()
# Counter is initialized with 0
print(pn_counter.get()) # 0
# xx_and_get() variants do the operation
# and return the final value
print(pn_counter.add_and_get(5)) # 5
print(pn_counter.decrement_and_get()) # 4
# get_and_xx() variants return the current
# value and then do the operation
print(pn_counter.get_and_increment()) # 4
print(pn_counter.get()) # 5
Using Flake ID Generator¶
Hazelcast FlakeIdGenerator is used to generate cluster-wide unique identifiers. Generated identifiers are 64-bit integers and are k-ordered (roughly ordered). IDs are in the range from 0 to 2^63-1 (maximum signed 64-bit integer value). For details, see the FlakeIdGenerator section in the Hazelcast Reference Manual.
# Get a Flake ID Generator called 'flake-id-generator'
generator = client.get_flake_id_generator("flake-id-generator").blocking()
# Generate a unique identifier
print("ID:", generator.new_id())
Configuring Flake ID Generator¶
You may configure Flake ID Generators using the flake_id_generators
argument:
client = hazelcast.HazelcastClient(
flake_id_generators={
"flake-id-generator": {
"prefetch_count": 123,
"prefetch_validity": 150
}
}
)
The following are the descriptions of the configuration elements and attributes:
keys of the dictionary: Name of the Flake ID Generator.
prefetch_count: Count of IDs that are pre-fetched in the background when one call to generator.new_id() is made. Its value must be in the range 1 - 100,000. Its default value is 100.
prefetch_validity: Specifies for how long the pre-fetched IDs can be used. After this time elapses, a new batch of IDs is fetched. The time unit is seconds. Its default value is 600 seconds (10 minutes). The IDs contain a timestamp component, which ensures a rough global ordering of them. If an ID is assigned to an object that was created later, it will be out of order. If ordering is not important, set this value to 0.
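As a small illustration of prefetching (a sketch; the generator name is arbitrary), most new_id() calls are served from the locally pre-fetched batch rather than the cluster:
client = hazelcast.HazelcastClient(
    flake_id_generators={
        "ids": {
            "prefetch_count": 100,
            "prefetch_validity": 600,
        }
    }
)
generator = client.get_flake_id_generator("ids").blocking()

# Roughly one cluster round trip per 100 calls, thanks to prefetching.
ids = [generator.new_id() for _ in range(1000)]
print(len(set(ids)))  # 1000 - the IDs are unique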
CP Subsystem¶
Hazelcast 4.0 introduces CP concurrency primitives with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency to availability during network partitions and client or server failures.
All data structures within CP Subsystem are available through the client.cp_subsystem component of the client.
Before using AtomicLong, Lock, and Semaphore, CP Subsystem has to be enabled on the cluster side. Refer to the CP Subsystem documentation for more information.
Data structures in CP Subsystem run in CP groups. Each CP group elects
its own Raft leader and runs the Raft consensus algorithm independently.
The CP data structures differ from the other Hazelcast data structures
in two aspects. First, an internal commit is performed on the METADATA
CP group every time you fetch a proxy from this interface. Hence,
callers should cache returned proxy objects. Second, if you call
distributed_object.destroy()
on a CP data structure proxy, that data
structure is terminated on the underlying CP group and cannot be
reinitialized until the CP group is force-destroyed. For this reason,
please make sure that you are completely done with a CP data structure
before destroying its proxy.
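For example, fetch a proxy once and reuse it for subsequent calls (a sketch; the counter name is arbitrary):
# Fetching the proxy commits an internal operation on the METADATA
# CP group, so do it once and reuse the returned object.
counter = client.cp_subsystem.get_atomic_long("request-count").blocking()

for _ in range(10):
    counter.increment_and_get()  # reuses the cached proxy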
Using AtomicLong¶
Hazelcast AtomicLong is the distributed implementation of an atomic 64-bit integer counter. It offers various atomic operations such as get, set, get_and_set, compare_and_set and increment_and_get. This data structure is a part of CP Subsystem.
An Atomic Long usage example is shown below.
# Get an AtomicLong called "my-atomic-long"
atomic_long = client.cp_subsystem.get_atomic_long("my-atomic-long").blocking()
# Get current value
value = atomic_long.get()
print("Value:", value)
# Prints:
# Value: 0
# Increment by 42
atomic_long.add_and_get(42)
# Set to 0 atomically if the current value is 42
result = atomic_long.compare_and_set(42, 0)
print('CAS operation result:', result)
# Prints:
# CAS operation result: True
AtomicLong implementation does not offer exactly-once / effectively-once execution semantics. It goes with at-least-once execution semantics by default and can cause an API call to be committed multiple times in case of CP member failures. It can be tuned to offer at-most-once execution semantics. Please see fail-on-indeterminate-operation-state server-side setting.
Using Lock¶
Hazelcast FencedLock
is the distributed and reentrant implementation
of a linearizable lock. It is CP with respect to the CAP principle. It
works on top of the Raft consensus algorithm. It offers linearizability
during crash-stop failures and network partitions. If a network
partition occurs, it remains available on at most one side of the
partition.
A basic Lock usage example is shown below.
# Get a FencedLock called "my-lock"
lock = client.cp_subsystem.get_lock("my-lock").blocking()
# Acquire the lock and get the fencing token
fence = lock.lock()
try:
# Your guarded code goes here
pass
finally:
# Make sure to release the lock
lock.unlock()
FencedLock works on top of CP sessions. It keeps a CP session open while the lock is acquired. Please refer to CP Session documentation for more information.
By default, FencedLock is reentrant. Once a caller acquires the lock, it
can acquire the lock reentrantly as many times as it wants in a
linearizable manner. You can configure the reentrancy behavior on the
member side. For instance, reentrancy can be disabled and FencedLock can
work as a non-reentrant mutex. You can also set a custom reentrancy
limit. When the reentrancy limit is reached, FencedLock does not block a lock call. Instead, it fails with LockAcquireLimitReachedError or a specified return value.
Distributed locks are unfortunately not equivalent to single-node mutexes because of the complexities in distributed systems, such as uncertain communication patterns and independent and partial failures. In an asynchronous network, no lock service can guarantee mutual exclusion, because there is no way to distinguish between a slow and a crashed process. Consider the following scenario, where a Hazelcast client acquires a FencedLock, then hits a long pause. Since it cannot commit session heartbeats while paused, its CP session will eventually be closed. After this moment, another Hazelcast client can acquire this lock. If the first client wakes up again, it may not immediately notice that it has lost ownership of the lock. In this case, multiple clients think they hold the lock. If they attempt to perform an operation on a shared resource, they can break the system. To prevent such situations, you can choose to use an infinite session timeout, but then you will probably face liveness issues instead. Even in the scenario above, if the first client actually crashes, requests sent by the two clients can be reordered in the network and hit the external resource in reverse order.
There is a simple solution for this problem. Lock holders are ordered by a monotonic fencing token, which increments each time the lock is assigned to a new owner. This fencing token can be passed to external services or resources to ensure sequential execution of side effects performed by lock holders.
The following diagram illustrates the idea. Client-1 acquires the lock
first and receives 1
as its fencing token. Then, it passes this
token to the external service, which is our shared resource in this
scenario. Just after that, Client-1 hits a long GC pause and eventually
loses ownership of the lock because it fails to commit CP session
heartbeats. Then, Client-2 chimes in and acquires the lock. Similar to
Client-1, Client-2 passes its fencing token to the external service.
After that, once Client-1 comes back alive, its write request will be
rejected by the external service, and only Client-2 will be able to
safely talk to it.

CP Fenced Lock diagram¶
You can read more about the fencing token idea in Martin Kleppmann’s “How to do distributed locking” blog post and Google’s Chubby paper.
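A minimal sketch of this pattern is shown below. Here, external_service and data are hypothetical stand-ins for your shared resource and payload; the resource is expected to reject requests carrying a token smaller than the largest one it has seen.
lock = client.cp_subsystem.get_lock("my-lock").blocking()

# The fence increases monotonically every time the lock
# is assigned to a new owner.
fence = lock.lock()
try:
    # Forward the fence so the (hypothetical) external service
    # can reject requests from stale lock holders.
    external_service.write(data, fencing_token=fence)
finally:
    lock.unlock()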
Using Semaphore¶
Hazelcast Semaphore is the distributed implementation of a linearizable semaphore. It offers multiple operations for acquiring the permits. This data structure is a part of CP Subsystem.
Semaphore is a cluster-wide counting semaphore. Conceptually, it
maintains a set of permits. Each acquire()
waits if necessary until
a permit is available, and then takes it. Dually, each release()
adds a permit, potentially releasing a waiting acquirer. However, no
actual permit objects are used; the semaphore just keeps a count of the
number available and acts accordingly.
A basic Semaphore usage example is shown below.
# Get a Semaphore called "my-semaphore"
semaphore = client.cp_subsystem.get_semaphore("my-semaphore").blocking()
# Try to initialize the semaphore
# (does nothing if the semaphore is already initialized)
semaphore.init(3)
# Acquire 3 permits out of 3
semaphore.acquire(3)
# Release 2 permits
semaphore.release(2)
# Check available permits
available = semaphore.available_permits()
print("Available:", available)
# Prints:
# Available: 2
Beware of the increased risk of indefinite postponement when using the multiple-permit acquire. If permits are released one by one, a caller waiting for one permit will acquire it before a caller waiting for multiple permits regardless of the call order. Correct usage of a semaphore is established by programming convention in the application.
As an alternative, potentially safer approach to the multiple-permit acquire, you can use the try_acquire() method of Semaphore. It tries to acquire the permits in an optimistic manner and immediately returns with a bool operation result. It also accepts an optional timeout argument which specifies the timeout in seconds to acquire the permits before giving up.
# Try to acquire 2 permits
success = semaphore.try_acquire(2)
# Check for the result of the acquire request
if success:
try:
# Your guarded code goes here
pass
finally:
# Make sure to release the permits
semaphore.release(2)
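The optional timeout mentioned above can be combined with the same pattern; a sketch:
# Wait up to 5 seconds for 2 permits before giving up.
if semaphore.try_acquire(2, timeout=5):
    try:
        # Your guarded code goes here
        pass
    finally:
        # Make sure to release the permits
        semaphore.release(2)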
Semaphore data structure has two variations:
The default implementation is session-aware. In this one, when a caller makes its very first acquire() call, it starts a new CP session with the underlying CP group. Then, the liveliness of the caller is tracked via this CP session. When the caller fails, permits acquired by this caller are automatically and safely released. However, the session-aware version comes with a limitation: a Hazelcast client cannot release permits before acquiring them first. In other words, a client can release only the permits it has acquired earlier.
The second implementation is sessionless. This one does not perform auto-cleanup of acquired permits on failures. Acquired permits are not bound to callers and permits can be released without acquiring them first. However, you need to handle failed permit owners on your own. If a Hazelcast server or a client fails while holding some permits, they will not be automatically released. You can use the sessionless CP Semaphore implementation by enabling the jdk-compatible server-side setting. Refer to the Semaphore configuration documentation for more details.
Using CountDownLatch¶
Hazelcast CountDownLatch
is the distributed implementation of a
linearizable and distributed countdown latch. This data structure is a
cluster-wide synchronization aid that allows one or more callers to wait
until a set of operations being performed by other callers completes.
This data structure is a part of CP Subsystem.
A basic CountDownLatch usage example is shown below.
import time
from threading import Thread

# Get a CountDownLatch called "my-latch"
latch = client.cp_subsystem.get_count_down_latch("my-latch").blocking()
# Try to initialize the latch
# (does nothing if the count is not zero)
initialized = latch.try_set_count(1)
print("Initialized:", initialized)
# Check count
count = latch.get_count()
print("Count:", count)
# Prints:
# Count: 1
# Bring the count down to zero after 10ms
def run():
time.sleep(0.01)
latch.count_down()
t = Thread(target=run)
t.start()
# Wait up to 1 second for the count to reach zero
count_is_zero = latch.await_latch(1)
print("Count is zero:", count_is_zero)
Note
CountDownLatch count can be reset with try_set_count()
after a countdown has finished, but not during an active count.
Using AtomicReference¶
Hazelcast AtomicReference
is the distributed implementation of a
linearizable object reference. It provides a set of atomic operations
allowing to modify the value behind the reference. This data structure
is a part of CP Subsystem.
A basic AtomicReference usage example is shown below.
# Get an AtomicReference called "my-ref"
my_ref = client.cp_subsystem.get_atomic_reference("my-ref").blocking()
# Set the value atomically
my_ref.set(42)
# Read the value
value = my_ref.get()
print("Value:", value)
# Prints:
# Value: 42
# Try to replace the value with "value"
# with a compare-and-set atomic operation
result = my_ref.compare_and_set(42, "value")
print("CAS result:", result)
# Prints:
# CAS result: True
The following are some considerations you need to know when you use AtomicReference:
AtomicReference works based on the byte-content and not on the object-reference. If you use the compare_and_set() method, do not change the original value because its serialized content will then be different.
All methods returning an object return a private copy. You can modify the private copy, but the rest of the world is shielded from your changes. If you want these changes to be visible to the rest of the world, you need to write the change back to the AtomicReference; but be careful about introducing a data-race.
The in-memory format of an AtomicReference is binary. The receiving side does not need to have the class definition available unless it needs to be deserialized on the other side, e.g., because a method like alter() is executed. This deserialization is done for every call that needs to have the object instead of the binary content, so be careful with expensive object graphs that need to be deserialized.
If you have an object with many fields or an object graph and you only need to calculate some information or need a subset of fields, you can use the apply() method. With the apply() method, the whole object does not need to be sent over the line; only the information that is relevant is sent.
AtomicReference does not offer exactly-once / effectively-once execution semantics. It goes with at-least-once execution semantics by default and can cause an API call to be committed multiple times in case of CP member failures. It can be tuned to offer at-most-once execution semantics. Please see fail-on-indeterminate-operation-state server-side setting.
Distributed Events¶
This chapter explains when various events are fired and describes how you can add event listeners on a Hazelcast Python client. These events can be categorized as cluster and distributed data structure events.
Cluster Events¶
You can add event listeners to a Hazelcast Python client. You can configure the following listeners to listen to the events on the client side:
Membership Listener: Notifies when a member joins or leaves the cluster.
Lifecycle Listener: Notifies when the client is starting, started, connected, disconnected, shutting down and shutdown.
Listening for Member Events¶
You can add the following types of member events to the ClusterService.
member_added: A new member is added to the cluster.
member_removed: An existing member leaves the cluster.
The ClusterService class exposes an add_listener() method that allows one or more functions to be attached to the member events emitted by the class.
The following is a membership listener registration using the add_listener() method.
def added_listener(member):
print("Member Added: The address is", member.address)
def removed_listener(member):
print("Member Removed. The address is", member.address)
client.cluster_service.add_listener(
member_added=added_listener,
member_removed=removed_listener,
fire_for_existing=True
)
Also, you can set the fire_for_existing flag to True to receive the events for the list of available members when the listener is registered.
Membership listeners can also be added during the client startup using
the membership_listeners
argument.
client = hazelcast.HazelcastClient(
membership_listeners=[
(added_listener, removed_listener)
]
)
Listening for Distributed Object Events¶
The events for distributed objects are invoked when they are created and destroyed in the cluster. When an event is received, the listener function will be called. The parameter passed into the listener function will be of the type DistributedObjectEvent. A DistributedObjectEvent contains the following fields:
name: Name of the distributed object.
service_name: Service name of the distributed object.
event_type: Type of the invoked event. It is either CREATED or DESTROYED.
The following is an example of adding a distributed object listener to a client.
def distributed_object_listener(event):
print("Distributed object event >>>", event.name, event.service_name, event.event_type)
client.add_distributed_object_listener(
listener_func=distributed_object_listener
).result()
map_name = "test_map"
# This call causes a CREATED event
test_map = client.get_map(map_name).blocking()
# This causes no event because the map was already created
test_map2 = client.get_map(map_name).blocking()
# This causes a DESTROYED event
test_map.destroy()
Output
Distributed object event >>> test_map hz:impl:mapService CREATED
Distributed object event >>> test_map hz:impl:mapService DESTROYED
Listening for Lifecycle Events¶
The lifecycle listener is notified for the following events:
STARTING: The client is starting.
STARTED: The client has started.
CONNECTED: The client connected to a member.
SHUTTING_DOWN: The client is shutting down.
DISCONNECTED: The client disconnected from a member.
SHUTDOWN: The client has shut down.
The following is an example of a lifecycle listener that is added to the client during startup, and its output.
def lifecycle_listener(state):
print("Lifecycle Event >>>", state)
client = hazelcast.HazelcastClient(
lifecycle_listeners=[
lifecycle_listener
]
)
Output:
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTING
Lifecycle Event >>> STARTING
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTED
Lifecycle Event >>> STARTED
INFO:hazelcast.connection:Trying to connect to Address(host=127.0.0.1, port=5701)
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is CONNECTED
Lifecycle Event >>> CONNECTED
INFO:hazelcast.connection:Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56732)
INFO:hazelcast.cluster:
Members [1] {
Member [172.17.0.2]:5701 - 7682c357-3bec-4841-b330-6f9ae0c08253
}
INFO:hazelcast.client:Client started
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTTING_DOWN
Lifecycle Event >>> SHUTTING_DOWN
INFO:hazelcast.connection:Removed connection to Address(host=127.0.0.1, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, connection: Connection(id=0, live=False, remote_address=Address(host=172.17.0.2, port=5701))
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is DISCONNECTED
Lifecycle Event >>> DISCONNECTED
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTDOWN
Lifecycle Event >>> SHUTDOWN
You can also add lifecycle listeners after client initialization using the LifecycleService.
client.lifecycle_service.add_listener(lifecycle_listener)
Distributed Data Structure Events¶
You can add event listeners to the distributed data structures.
Listening for Map Events¶
You can listen to map-wide or entry-based events by attaching functions to the Map objects using the add_entry_listener() method. You can listen for the following events.
added_func: Function to be called when an entry is added to the map.
removed_func: Function to be called when an entry is removed from the map.
updated_func: Function to be called when an entry is updated.
evicted_func: Function to be called when an entry is evicted from the map.
evict_all_func: Function to be called when entries are evicted from the map.
clear_all_func: Function to be called when entries are cleared from the map.
merged_func: Function to be called when a WAN-replicated entry is merged.
expired_func: Function to be called when an entry's time-to-live expires.
You can also filter the events using key or predicate. There is also an option called include_value. When this option is set to True, the event will also include the value.
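For instance, the sketch below registers a listener that only fires for a single key; the map name and key used here are arbitrary.
customer_map = client.get_map("customers").blocking()

def updated(event):
    print("Entry Updated: %s-%s" % (event.key, event.value))

# Fire only for the entry with key "4", and include the new value.
customer_map.add_entry_listener(
    key="4",
    include_value=True,
    updated_func=updated,
)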
An entry-based event is fired after the operations that affect a specific entry, for example, map.put(), map.remove() or map.evict(). An EntryEvent object is passed to the listener function.
See the following example.
def added(event):
print("Entry Added: %s-%s" % (event.key, event.value))
customer_map.add_entry_listener(include_value=True, added_func=added)
customer_map.put("4", "Jane Doe")
A map-wide event is fired as a result of a map-wide operation, for example, map.clear() or map.evict_all(). An EntryEvent object is passed to the listener function.
See the following example.
def cleared(event):
print("Map Cleared:", event.number_of_affected_entries)
customer_map.add_entry_listener(include_value=True, clear_all_func=cleared)
customer_map.clear()
Distributed Computing¶
This chapter explains how you can use Hazelcast entry processor implementation in the Python client.
Using EntryProcessor¶
Hazelcast supports entry processing. An entry processor is a function that executes your code on a map entry in an atomic way.
An entry processor is a good option if you perform bulk processing on a Map. Usually you perform a loop of keys: executing Map.get(key), mutating the value, and finally putting the entry back in the map using Map.put(key, value). If you perform this process from a client or from a member where the keys do not exist, you effectively perform two network hops for each update: the first to retrieve the data and the second to update the mutated value.
If you are doing the process described above, you should consider using entry processors. An entry processor performs the read and the update on the member where the data resides, eliminating the costly network hops described above.
Note
Entry processor is meant to process a single entry per call. Processing multiple entries and data structures in an entry processor is not supported as it may result in deadlocks on the server side.
Hazelcast sends the entry processor to each cluster member and these members apply it to the map entries. Therefore, if you add more members, your processing completes faster.
Processing Entries¶
The Map class provides the following methods for entry processing:
execute_on_key processes an entry mapped by a key.
execute_on_keys processes entries mapped by a list of keys.
execute_on_entries can process all entries in a map with a defined predicate. The predicate is optional.
In the Python client, an EntryProcessor should be IdentifiedDataSerializable or Portable because the server should be able to deserialize it to process.
The following is an example of an EntryProcessor which is an IdentifiedDataSerializable.
from hazelcast.serialization.api import IdentifiedDataSerializable
class IdentifiedEntryProcessor(IdentifiedDataSerializable):
def __init__(self, value=None):
self.value = value
def read_data(self, object_data_input):
self.value = object_data_input.read_string()
def write_data(self, object_data_output):
object_data_output.write_string(self.value)
def get_factory_id(self):
return 5
def get_class_id(self):
return 1
Now, you need to make sure that the Hazelcast member recognizes the
entry processor. For this, you need to implement the Java equivalent of
your entry processor and its factory, and create your own compiled class
or JAR files. For adding your own compiled class or JAR files to the
server’s CLASSPATH
, see the
Adding User Library to CLASSPATH section.
The following is the Java equivalent of the entry processor in Python client given above:
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;
import java.io.IOException;
import java.util.Map;
public class IdentifiedEntryProcessor
implements EntryProcessor<String, String, String>, IdentifiedDataSerializable {
static final int CLASS_ID = 1;
private String value;
public IdentifiedEntryProcessor() {
}
@Override
public int getFactoryId() {
return IdentifiedFactory.FACTORY_ID;
}
@Override
public int getClassId() {
return CLASS_ID;
}
@Override
public void writeData(ObjectDataOutput out) throws IOException {
out.writeUTF(value);
}
@Override
public void readData(ObjectDataInput in) throws IOException {
value = in.readUTF();
}
@Override
public String process(Map.Entry<String, String> entry) {
entry.setValue(value);
return value;
}
}
You can implement the above processor’s factory as follows:
import com.hazelcast.nio.serialization.DataSerializableFactory;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;
public class IdentifiedFactory implements DataSerializableFactory {
public static final int FACTORY_ID = 5;
@Override
public IdentifiedDataSerializable create(int typeId) {
if (typeId == IdentifiedEntryProcessor.CLASS_ID) {
return new IdentifiedEntryProcessor();
}
return null;
}
}
Now you need to configure the hazelcast.xml
to add your factory as
shown below.
<hazelcast>
<serialization>
<data-serializable-factories>
<data-serializable-factory factory-id="5">
IdentifiedFactory
</data-serializable-factory>
</data-serializable-factories>
</serialization>
</hazelcast>
The code that runs on the entries is implemented in Java on the server side. The client side entry processor is used to specify which entry processor should be called. For more details about the Java implementation of the entry processor, see the Entry Processor section in the Hazelcast Reference Manual.
After the above implementations and configuration are done and you start
the server where your library is added to its CLASSPATH
, you can use
the entry processor in the Map
methods. See the following example.
distributed_map = client.get_map("my-distributed-map").blocking()
distributed_map.put("key", "not-processed")
distributed_map.execute_on_key("key", IdentifiedEntryProcessor("processed"))
print(distributed_map.get("key")) # Outputs 'processed'
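execute_on_entries works the same way and can optionally be narrowed with a predicate. The sketch below reuses the processor above; the predicate shown is an assumption about your values.
from hazelcast.predicate import sql

# Apply the processor only to the entries that are not processed yet.
results = distributed_map.execute_on_entries(
    IdentifiedEntryProcessor("processed"),
    sql("this != 'processed'"),
)
print(results)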
Distributed Query¶
Hazelcast partitions your data and spreads it across a cluster of members. You can iterate over the map entries and look for certain entries (specified by predicates) you are interested in. However, this is not very efficient because you will have to bring the entire entry set to the client and iterate over it locally. Instead, Hazelcast allows you to run distributed queries on your distributed map.
How Distributed Query Works¶
The requested predicate is sent to each member in the cluster.
Each member looks at its own local entries and filters them according to the predicate. At this stage, key-value pairs of the entries are deserialized and then passed to the predicate.
The predicate requester merges all the results coming from each member into a single set.
Distributed query is highly scalable. If you add new members to the cluster, the partition count for each member is reduced and thus the time spent by each member on iterating its entries is reduced. In addition, the pool of partition threads evaluates the entries concurrently in each member, and the network traffic is also reduced since only filtered data is sent to the requester.
Predicate Module Operators
The predicate module offered by the Python client includes many operators for your query requirements. Some of them are explained below.
equal: Checks if the result of an expression is equal to a given value.
not_equal: Checks if the result of an expression is not equal to a given value.
instance_of: Checks if the result of an expression has a certain type.
like: Checks if the result of an expression matches some string pattern. % (percentage sign) is the placeholder for many characters, _ (underscore) is the placeholder for only one character.
ilike: Checks if the result of an expression matches some string pattern in a case-insensitive manner.
greater: Checks if the result of an expression is greater than a certain value.
greater_or_equal: Checks if the result of an expression is greater than or equal to a certain value.
less: Checks if the result of an expression is less than a certain value.
less_or_equal: Checks if the result of an expression is less than or equal to a certain value.
between: Checks if the result of an expression is between two values (inclusive).
in_: Checks if the result of an expression is an element of a certain list.
not_: Checks if the result of an expression is false.
regex: Checks if the result of an expression matches some regular expression.
true: Creates an always-true predicate that will pass all items.
false: Creates an always-false predicate that will filter out all items.
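A few of these operators in action (a sketch, assuming an employee map whose values expose name and age attributes):
from hazelcast.predicate import between, ilike, in_

employee_map = client.get_map("employee").blocking()

# Employees whose age is between 20 and 30 (inclusive)
twenties = employee_map.values(between("age", 20, 30))

# Employees whose name starts with "jo", case-insensitively
jos = employee_map.values(ilike("name", "jo%"))

# Employees whose age is one of the listed values
chosen = employee_map.values(in_("age", 25, 30, 35))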
Hazelcast offers the following ways for distributed query purposes:
Combining Predicates with AND, OR, NOT
Distributed SQL Query
Employee Map Query Example¶
Assume that you have an employee map containing instances of the Employee class, as coded below.
from hazelcast.serialization.api import Portable
class Employee(Portable):
def __init__(self, name=None, age=None, active=None, salary=None):
self.name = name
self.age = age
self.active = active
self.salary = salary
def get_class_id(self):
return 100
def get_factory_id(self):
return 1000
def read_portable(self, reader):
self.name = reader.read_string("name")
self.age = reader.read_int("age")
self.active = reader.read_boolean("active")
self.salary = reader.read_double("salary")
def write_portable(self, writer):
writer.write_string("name", self.name)
writer.write_int("age", self.age)
writer.write_boolean("active", self.active)
writer.write_double("salary", self.salary)
Note that Employee extends Portable. As portable types are not deserialized on the server side for querying, you don't need to implement its Java equivalent on the server side.
For types that are not portable, you need to implement their Java equivalents and their data serializable factories on the server side for the server to reconstitute the objects from their binary formats. In this case, you need to compile the Employee and related factory classes with the server's CLASSPATH and add them to the user-lib directory in the extracted hazelcast-<version>.zip (or tar) before starting the server. See the Adding User Library to CLASSPATH section.
Note
Querying with Portable classes is faster compared to IdentifiedDataSerializable.
Querying by Combining Predicates with AND, OR, NOT¶
You can combine predicates using the and_, or_ and not_ operators, as shown in the example below.
from hazelcast.predicate import and_, equal, less
employee_map = client.get_map("employee")
predicate = and_(equal('active', True), less('age', 30))
employees = employee_map.values(predicate).result()
In the above example code, predicate verifies whether the entry is active and its age value is less than 30. This predicate is applied to the employee map using the Map.values method. This method sends the predicate to all cluster members and merges the results coming from them.
Note
Predicates can also be applied to the key_set and entry_set of a map.
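For example, continuing with the predicate from the snippet above:
# Keys of the matching entries
keys = employee_map.key_set(predicate).result()

# Key-value pairs of the matching entries
entries = employee_map.entry_set(predicate).result()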
Querying with SQL¶
SqlPredicate takes the regular SQL where clause. See the following example:
from hazelcast.predicate import sql
employee_map = client.get_map("employee")
employees = employee_map.values(sql("active AND age < 30")).result()
Supported SQL Syntax¶
AND/OR: <expression> AND <expression> AND <expression>…
active AND age > 30
active = false OR age = 45 OR name = 'Joe'
active AND ( age > 20 OR salary < 60000 )
Equality: =, !=, <, <=, >, >=
<expression> = value
age <= 30
name = 'Joe'
salary != 50000
BETWEEN: <attribute> [NOT] BETWEEN <value1> AND <value2>
age BETWEEN 20 AND 33 ( same as age >= 20 AND age <= 33 )
age NOT BETWEEN 30 AND 40 ( same as age < 30 OR age > 40 )
IN: <attribute> [NOT] IN (val1, val2,…)
age IN ( 20, 30, 40 )
age NOT IN ( 60, 70 )
active AND ( salary >= 50000 OR ( age NOT BETWEEN 20 AND 30 ) )
age IN ( 20, 30, 40 ) AND salary BETWEEN 50000 AND 80000
LIKE: <attribute> [NOT] LIKE 'expression'
The % (percentage sign) is the placeholder for multiple characters, an _ (underscore) is the placeholder for only one character.
name LIKE 'Jo%' (true for 'Joe', 'Josh', 'Joseph' etc.)
name LIKE 'Jo_' (true for 'Joe'; false for 'Josh')
name NOT LIKE 'Jo_' (true for 'Josh'; false for 'Joe')
name LIKE 'J_s%' (true for 'Josh', 'Joseph'; false for 'John', 'Joe')
ILIKE: <attribute> [NOT] ILIKE 'expression'
ILIKE is similar to the LIKE predicate but in a case-insensitive manner.
name ILIKE 'Jo%' (true for 'Joe', 'joe', 'jOe', 'Josh', 'joSH', etc.)
name ILIKE 'Jo_' (true for 'Joe' or 'jOE'; false for 'Josh')
REGEX: <attribute> [NOT] REGEX 'expression'
name REGEX 'abc-.*' (true for 'abc-123'; false for 'abx-123')
Querying Examples with Predicates¶
You can use the __key attribute to perform a predicated search for the entry keys. See the following example:
from hazelcast.predicate import sql
person_map = client.get_map("persons").blocking()
person_map.put("John", 28)
person_map.put("Mary", 23)
person_map.put("Judy", 30)
predicate = sql("__key like M%")
persons = person_map.values(predicate)
print(persons[0]) # Outputs '23'
In this example, the code creates a list with the values whose keys start with the letter “M”.
You can use the this attribute to perform a predicated search for the entry values. See the following example:
from hazelcast.predicate import greater_or_equal
person_map = client.get_map("persons").blocking()
person_map.put("John", 28)
person_map.put("Mary", 23)
person_map.put("Judy", 30)
predicate = greater_or_equal("this", 27)
persons = person_map.values(predicate)
print(persons[0], persons[1]) # Outputs '28 30'
In this example, the code creates a list with the values greater than or equal to “27”.
Querying with JSON Strings¶
You can query JSON strings stored inside your Hazelcast clusters. To query a JSON string, you first need to create a HazelcastJsonValue from the JSON string or a JSON serializable object. You can use HazelcastJsonValue objects both as keys and values in the distributed data structures. Then, it is possible to query these objects using the Hazelcast query methods explained in this section.
from hazelcast.core import HazelcastJsonValue
from hazelcast.predicate import less

person1 = "{ \"name\": \"John\", \"age\": 35 }"
person2 = "{ \"name\": \"Jane\", \"age\": 24 }"
person3 = {"name": "Trey", "age": 17}
id_person_map = client.get_map("json-values").blocking()
# From JSON string
id_person_map.put(1, HazelcastJsonValue(person1))
id_person_map.put(2, HazelcastJsonValue(person2))
# From JSON serializable object
id_person_map.put(3, HazelcastJsonValue(person3))
people_under_21 = id_person_map.values(less("age", 21))
When running the queries, Hazelcast treats values extracted from the JSON documents as Java types so they can be compared with the query attribute. The JSON specification defines five primitive types to be used in JSON documents: number, string, true, false and null. The string, true/false and null types are treated as String, boolean and null, respectively. Extracted number values are treated as longs if they can be represented by a long; otherwise, numbers are treated as doubles.
It is possible to query nested attributes and arrays in JSON documents. The query syntax is the same as querying other Hazelcast objects using Predicates.
# Sample JSON object
# {
# "departmentId": 1,
# "room": "alpha",
# "people": [
# {
# "name": "Peter",
# "age": 26,
# "salary": 50000
# },
# {
# "name": "Jonah",
# "age": 50,
# "salary": 140000
# }
# ]
# }
from hazelcast.predicate import equal

departments = client.get_map("departments").blocking()

# The following query finds all the departments that have a person named "Peter" working in them.
department_with_peter = departments.values(equal("people[any].name", "Peter"))
HazelcastJsonValue is a lightweight wrapper around your JSON strings. It is used merely as a way to indicate that the contained string should be treated as a valid JSON value. Hazelcast does not check the validity of JSON strings put into the maps. Putting an invalid JSON string into a map is permissible. However, in that case, whether such an entry is going to be returned from a query is undefined.
Metadata Creation for JSON Querying¶
Hazelcast stores a metadata object per stored JSON serialized object. This metadata object is created every time a JSON serialized object is put into a Map. Metadata is later used to speed up query operations. Metadata creation is on by default. Depending on your application's needs, you may want to turn off metadata creation to decrease the put latency and increase throughput.
You can configure this using the metadata-policy element in the map configuration on the member side as follows:
<hazelcast>
...
<map name="map-a">
<!--
valid values for metadata-policy are:
- OFF
- CREATE_ON_UPDATE (default)
-->
<metadata-policy>OFF</metadata-policy>
</map>
...
</hazelcast>
Filtering with Paging Predicates¶
Hazelcast Python client provides paging for defined predicates. With its
PagingPredicate
, you can get a collection of keys, values, or
entries page by page by filtering them with predicates and giving the
size of the pages. Also, you can sort the entries by specifying
comparators. In this case, the comparator should be either Portable
or IdentifiedDataSerializable
and the serialization factory
implementations should be registered on the member side. Please note that paging is done on the cluster members; hence, the client only sends a marker comparator to indicate to members which comparator to use. The comparison logic must be defined on the member side by implementing the java.util.Comparator<Map.Entry> interface.
Paging predicates require the objects to be deserialized on the member side from which the collection is retrieved. Therefore, you need to register the serialization factories you use on all the members on which the paging predicates are used. See the Adding User Library to CLASSPATH section for more details.
In the example code below:
The greater_or_equal predicate gets values from the students map. This predicate has a filter to retrieve the objects with an age greater than or equal to 18.
Then a PagingPredicate is constructed in which the page size is 5, so that there are five objects in each page. The first time the values() method is called, the first page is fetched.
Finally, the subsequent page is fetched by calling the next_page() method of PagingPredicate and querying the map again with the updated PagingPredicate.
from hazelcast.predicate import paging, greater_or_equal
...
m = client.get_map("students").blocking()
predicate = paging(greater_or_equal("age", 18), 5)
# Retrieve the first page
values = m.values(predicate)
...
# Set up next page
predicate.next_page()
# Retrieve next page
values = m.values(predicate)
If a comparator is not specified for PagingPredicate, but you want to get a collection of keys or values page by page, the keys or values must implement the java.lang.Comparable interface on the member side. Otherwise, paging fails with an exception from the server. Luckily, a lot of types implement the Comparable interface by default, including the primitive types, so you may use values of types int, float, str etc. in paging without specifying a comparator on the Python client.
You can also access a specific page more easily by setting the predicate.page attribute before making the remote call. This way, if you make a query for the hundredth page, for example, the hundredth page is fetched directly instead of being reached one page at a time through repeated next_page() calls.
Note
PagingPredicate, also known as Order & Limit, is not supported in a transactional context.
Aggregations¶
Aggregations allow computing the value of some function (e.g. sum or max) over the stored map entries. The computation is performed in a fully distributed manner, so no data other than the computed function value is transferred to the client, making the computation fast.
The aggregator module provides a wide variety of built-in aggregators. The full list is presented below:
count
distinct
double_avg
double_sum
fixed_point_sum
floating_point_sum
int_avg
int_sum
long_avg
long_sum
max_
min_
number_avg
max_by
min_by
These aggregators are used with the map.aggregate function, which takes an optional predicate argument.
See the following example.
import hazelcast
from hazelcast.aggregator import count, number_avg
from hazelcast.predicate import greater_or_equal
client = hazelcast.HazelcastClient()
employees = client.get_map("employees").blocking()
employees.put("John Stiles", 23)
employees.put("Judy Doe", 29)
employees.put("Richard Miles", 38)
employee_count = employees.aggregate(count())
# Prints:
# There are 3 employees
print("There are %d employees" % employee_count)
# Run count with predicate
employee_count = employees.aggregate(count(), greater_or_equal("this", 25))
# Prints:
# There are 2 employees older than 24
print("There are %d employees older than 24" % employee_count)
# Run average aggregate
average_age = employees.aggregate(number_avg())
# Prints:
# Average age is 30
print("Average age is %f" % average_age)
Projections¶
There are cases where, instead of sending all the data returned by a query from the server, you want to transform (strip down) each result object in order to avoid redundant network traffic. For example, you might select all employees based on some criteria, but you just want to return their names instead of the whole object. This is easily doable with projections.
The projection module provides three projection functions:
single_attribute: Extracts a single attribute from an object and returns it.
multi_attribute: Extracts multiple attributes from an object and returns them as a list.
identity: Returns the object as it is.
These projections are used with the map.project function, which takes an optional predicate argument.
See the following example.
import hazelcast
from hazelcast.core import HazelcastJsonValue
from hazelcast.predicate import greater
from hazelcast.projection import single_attribute, multi_attribute
client = hazelcast.HazelcastClient()
employees = client.get_map("employees").blocking()
employees.put(1, HazelcastJsonValue({"age": 25, "height": 180, "weight": 60}))
employees.put(2, HazelcastJsonValue({"age": 21, "height": 170, "weight": 70}))
employees.put(3, HazelcastJsonValue({"age": 40, "height": 175, "weight": 75}))
ages = employees.project(single_attribute("age"))
# Prints: "Ages of the employees are [21, 25, 40]"
print("Ages of the employees are %s" % ages)
filtered_ages = employees.project(single_attribute("age"), greater("age", 23))
# Prints: "Ages of the filtered employees are [25, 40]"
print("Ages of the filtered employees are %s" % filtered_ages)
attributes = employees.project(multi_attribute("age", "height"))
# Prints: "Ages and heights of the employees are [[21, 170], [25, 180], [40, 175]]"
print("Ages and heights of the employees are %s" % attributes)
Performance¶
Near Cache¶
Map entries in Hazelcast are partitioned across the cluster members.
Hazelcast clients do not have local data at all. Suppose you read the
key k
a number of times from a Hazelcast client and k
is owned
by a member in your cluster. Then each map.get(k)
will be a remote
operation, which creates a lot of network trips. If you have a map that
is mostly read, then you should consider creating a local Near Cache, so
that reads are sped up and less network traffic is created.
These benefits do not come for free; please consider the following trade-offs:
Clients with a Near Cache will have to hold the extra cached data, which increases their memory consumption.
If invalidation is enabled and entries are updated frequently, then invalidations will be costly.
Near Cache breaks the strong consistency guarantees; you might be reading stale data.
Near Cache is highly recommended for maps that are mostly read.
Configuring Near Cache¶
The following snippet shows how a Near Cache is configured in the Python client using the near_caches argument, presenting all available values for each element. When an element is missing from the configuration, its default value is used.
import hazelcast
from hazelcast.config import InMemoryFormat, EvictionPolicy

client = hazelcast.HazelcastClient(
near_caches={
"mostly-read-map": {
"invalidate_on_change": True,
"time_to_live": 60,
"max_idle": 30,
# You can also set these to "OBJECT"
# and "LRU" without importing anything.
"in_memory_format": InMemoryFormat.OBJECT,
"eviction_policy": EvictionPolicy.LRU,
"eviction_max_size": 100,
"eviction_sampling_count": 8,
"eviction_sampling_pool_size": 16
}
}
)
Following are the descriptions of all configuration elements:
in_memory_format: Specifies in which format data will be stored in your Near Cache. Note that a map's in-memory format can be different from that of its Near Cache. Available values are as follows:
BINARY: Data will be stored in serialized binary format (default value).
OBJECT: Data will be stored in deserialized format.
invalidate_on_change: Specifies whether the cached entries are evicted when the entries are updated or removed. Its default value is True.
time_to_live: Maximum number of seconds for each entry to stay in the Near Cache. Entries that are older than this period are automatically evicted from the Near Cache. Regardless of the eviction policy used, time_to_live still applies. Any non-negative number can be assigned. Its default value is None. None means infinite.
max_idle: Maximum number of seconds each entry can stay in the Near Cache as untouched (not read). Entries that are not read for more than this period are removed from the Near Cache. Any non-negative number can be assigned. Its default value is None. None means infinite.
eviction_policy: Eviction policy configuration. Available values are as follows:
LRU: Least Recently Used (default value).
LFU: Least Frequently Used.
NONE: No items are evicted and the eviction_max_size property is ignored. You still can combine it with time_to_live and max_idle to evict items from the Near Cache.
RANDOM: A random item is evicted.
eviction_max_size: Maximum number of entries kept in the memory before eviction kicks in.
eviction_sampling_count: Number of random entries that are evaluated to see if some of them are already expired. If there are expired entries, those are removed and there is no need for eviction.
eviction_sampling_pool_size: Size of the pool for eviction candidates. The pool is kept sorted according to the eviction policy. The entry with the highest score is evicted.
Near Cache Example for Map¶
The following is an example configuration for a Near Cache defined in the mostly-read-map map. According to this configuration, the entries are stored as OBJECTs in this Near Cache and eviction starts when the count of entries reaches 5000; entries are evicted based on the LRU (Least Recently Used) policy. In addition, when an entry is updated or removed on the member side, it is eventually evicted on the client side.
from hazelcast.config import InMemoryFormat, EvictionPolicy

client = hazelcast.HazelcastClient(
near_caches={
"mostly-read-map": {
"invalidate_on_change": True,
"in_memory_format": InMemoryFormat.OBJECT,
"eviction_policy": EvictionPolicy.LRU,
"eviction_max_size": 5000,
}
}
)
Near Cache Eviction¶
In the scope of Near Cache, eviction means evicting (clearing) the entries selected according to the given eviction_policy when the specified eviction_max_size has been reached. The eviction_max_size defines the entry count when the Near Cache is full and determines whether eviction should be triggered. Once eviction is triggered, the configured eviction_policy determines which, if any, entries must be evicted.
Near Cache Expiration¶
Expiration means the eviction of expired records. A record is expired:
if it is not touched (accessed/read) for max_idle seconds, or
if time_to_live seconds have passed since it was put into the Near Cache.
The actual expiration is performed when a record is accessed: it is checked whether the record is expired or not. If it is expired, it is evicted and KeyError is raised to the caller.
Near Cache Invalidation¶
Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). See the Near Cache Invalidation section in the Hazelcast Reference Manual.
Monitoring and Logging¶
Enabling Client Statistics¶
You can monitor your clients using Hazelcast Management Center.
As a prerequisite, you need to enable the client statistics before starting your clients. There are two arguments of HazelcastClient related to client statistics:
statistics_enabled: If set to True, it enables collecting the client statistics and sending them to the cluster. When it is True, you can monitor the clients that are connected to your Hazelcast cluster using Hazelcast Management Center. Its default value is False.
statistics_period: Period in seconds the client statistics are collected and sent to the cluster. Its default value is 3.
You can enable client statistics and set a non-default period in seconds as follows:
client = hazelcast.HazelcastClient(
statistics_enabled=True,
statistics_period=4
)
Hazelcast Python client can collect statistics related to the client and Near Caches without an extra dependency. However, to get statistics about the runtime and operating system, psutil is used as an extra dependency. If psutil is installed, runtime and operating system statistics will be sent to the cluster along with the statistics related to the client and Near Caches. If not, only the client and Near Cache statistics will be sent.
psutil can be installed independently or together with the Hazelcast Python client as follows:
From PyPI
pip install hazelcast-python-client[stats]
From source
pip install -e .[stats]
After enabling the client statistics, you can monitor your clients using Hazelcast Management Center. Please refer to the Monitoring Clients section in the Hazelcast Management Center Reference Manual for more information on the client statistics.
Logging Configuration¶
Hazelcast Python client uses Python's builtin logging package to perform logging. All the loggers used throughout the client are identified by their module names. Hence, one may configure the hazelcast parent logger and use the same configuration for the child loggers, such as hazelcast.lifecycle, without extra effort.
Below is an example of a logging configuration with the INFO log level and a StreamHandler with a custom format, and its output.
import logging
import hazelcast
logger = logging.getLogger("hazelcast")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
client = hazelcast.HazelcastClient()
client.shutdown()
Output
2020-10-16 13:31:35,605 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is STARTING
2020-10-16 13:31:35,605 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is STARTED
2020-10-16 13:31:35,605 - hazelcast.connection - INFO - Trying to connect to Address(host=127.0.0.1, port=5701)
2020-10-16 13:31:35,622 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is CONNECTED
2020-10-16 13:31:35,622 - hazelcast.connection - INFO - Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56752)
2020-10-16 13:31:35,623 - hazelcast.cluster - INFO -
Members [1] {
Member [172.17.0.2]:5701 - 7682c357-3bec-4841-b330-6f9ae0c08253
}
2020-10-16 13:31:35,624 - hazelcast.client - INFO - Client started
2020-10-16 13:31:35,624 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is SHUTTING_DOWN
2020-10-16 13:31:35,624 - hazelcast.connection - INFO - Removed connection to Address(host=127.0.0.1, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, connection: Connection(id=0, live=False, remote_address=Address(host=172.17.0.2, port=5701))
2020-10-16 13:31:35,624 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is DISCONNECTED
2020-10-16 13:31:35,634 - hazelcast.lifecycle - INFO - HazelcastClient 4.0.0 is SHUTDOWN
A handy alternative to the above example would be configuring the root logger using the logging.basicConfig() utility method. Beware that every logger is a child of the root logger in Python, so configuring the root logger may have application-level impact. Nonetheless, it is useful for testing or development purposes.
import logging
import hazelcast
logging.basicConfig(level=logging.INFO)
client = hazelcast.HazelcastClient()
client.shutdown()
Output
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTING
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is STARTED
INFO:hazelcast.connection:Trying to connect to Address(host=127.0.0.1, port=5701)
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is CONNECTED
INFO:hazelcast.connection:Authenticated with server Address(host=172.17.0.2, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, server version: 4.0, local address: Address(host=127.0.0.1, port=56758)
INFO:hazelcast.cluster:
Members [1] {
Member [172.17.0.2]:5701 - 7682c357-3bec-4841-b330-6f9ae0c08253
}
INFO:hazelcast.client:Client started
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTTING_DOWN
INFO:hazelcast.connection:Removed connection to Address(host=127.0.0.1, port=5701):7682c357-3bec-4841-b330-6f9ae0c08253, connection: Connection(id=0, live=False, remote_address=Address(host=172.17.0.2, port=5701))
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is DISCONNECTED
INFO:hazelcast.lifecycle:HazelcastClient 4.0.0 is SHUTDOWN
To learn more about the logging package and its capabilities, please see the logging cookbook and the documentation of the logging package.
Defining Client Labels¶
Through the client labels, you can assign special roles for your clients and use these roles to perform some actions specific to those client connections.
You can also group your clients using the client labels. These client groups can be blacklisted in Hazelcast Management Center so that they can be prevented from connecting to a cluster. See the related section in the Hazelcast Management Center Reference Manual for more information on this topic.
You can define the client labels using the labels config option. See the example below.
client = hazelcast.HazelcastClient(
labels=[
"role admin",
"region foo"
]
)
Defining Client Name¶
Each client has a name associated with it. By default, it is set to hz.client_${CLIENT_ID}. Here, CLIENT_ID starts from 0 and is incremented by 1 for each new client. This ID is incremented and set by the client, so it may not be unique between different clients used by different applications.
You can set the client name using the client_name configuration element.
client = hazelcast.HazelcastClient(
    client_name="blue_client_0"
)
Configuring Load Balancer¶
Load balancer configuration allows you to specify which cluster member the next operation will be sent to when the load balancer is queried.
If the client is a smart client, only the operations that are not key-based are routed to the member returned by the LoadBalancer. If the client is not a smart client, the LoadBalancer is ignored.
By default, the client uses a round-robin load balancer, which picks each cluster member in turn. The client also provides a random load balancer, which, as the name suggests, picks the next member randomly. You can use one of them by setting the load_balancer config option.
The following are example configurations.
import hazelcast
from hazelcast.util import RandomLB

client = hazelcast.HazelcastClient(
    load_balancer=RandomLB()
)
You can also provide a custom load balancer implementation to use different load balancing policies. To do so, provide a class that implements the LoadBalancer interface, or extend the AbstractLoadBalancer class, and pass the load balancer object to the load_balancer config option.
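For instance, the following is a minimal sketch of a custom load balancer that always prefers the first member in the list. It assumes, as in the client's built-in implementations, that AbstractLoadBalancer maintains the current member list in the self._members attribute; depending on the client version, you may also need to implement the data-member variants of this method (the built-in RoundRobinLB and RandomLB are good references).
import hazelcast
from hazelcast.util import AbstractLoadBalancer

class FirstMemberLB(AbstractLoadBalancer):
    """Hypothetical load balancer that always returns the first known member."""

    def next(self):
        members = self._members  # maintained by AbstractLoadBalancer
        if not members:
            return None
        return members[0]

client = hazelcast.HazelcastClient(
    load_balancer=FirstMemberLB()
)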
SQL¶
This chapter provides information on how you can run SQL queries on a Hazelcast cluster using the Python client.
Hazelcast API¶
You can use SQL to query data in maps, Kafka topics, or a variety of file systems. Results can be sent directly to the client or inserted into maps or Kafka topics. For streaming queries, you can submit them to a cluster as jobs to run in the background.
Warning
The SQL feature is stabilized in 5.0 versions of the client and the Hazelcast platform. In order for the client and the server to be fully compatible with each other, their major versions must be the same.
Note
In order to use the SQL service from the Python client, the Jet engine must be enabled on the members and the hazelcast-sql module must be in the classpath of the members.
If you are using the CLI, Docker image, or distributions to start Hazelcast members, then you don’t need to do anything, as the above preconditions are already satisfied for such members.
However, if you are using Hazelcast members in the embedded mode, or you receive errors such as "The Jet engine is disabled" or "Cannot execute SQL query because "hazelcast-sql" module is not in the classpath" while executing queries, enable the Jet engine by following the instructions in the error message, or add the hazelcast-sql module to your member's classpath.
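For embedded members, the Jet engine can be enabled in the member configuration. A sketch of the relevant hazelcast.xml fragment:
<hazelcast>
    <jet enabled="true"/>
</hazelcast>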
Supported Queries¶
Ad-Hoc Queries
Query large datasets either in one or multiple systems and/or run aggregations on them to get deeper insights.
See the Get Started with SQL Over Maps tutorial for reference.
Streaming Queries
Also known as continuous queries, these keep an open connection to a streaming data source and run a continuous query to get near real-time updates.
See the Get Started with SQL Over Kafka tutorial for reference.
Federated Queries
Query different datasets such as Kafka topics and Hazelcast maps, using a single query. Normally, querying in SQL is database or dataset-specific. However, with Mappings, you can pull information from different sources to present a more complete picture.
See the Get Started with SQL Over Files tutorial for reference.
Mappings¶
To connect to data sources and query them as if they were tables, the SQL service uses a concept called mappings.
Mappings store essential metadata about the source’s data model, data access patterns, and serialization formats so that the SQL service can connect to the data source and query it.
You can create mappings for data sources such as Hazelcast maps, Kafka topics, and files by using the CREATE MAPPING statement.
Querying Map¶
With SQL you can query the keys and values of maps in your cluster.
Assume that we have a map called employees that contains values of type Employee:
from hazelcast.serialization.api import Portable

class Employee(Portable):
    def __init__(self, name=None, age=None):
        self.name = name
        self.age = age

    def write_portable(self, writer):
        writer.write_string("name", self.name)
        writer.write_int("age", self.age)

    def read_portable(self, reader):
        self.name = reader.read_string("name")
        self.age = reader.read_int("age")

    def get_factory_id(self):
        return 1

    def get_class_id(self):
        return 2
employees = client.get_map("employees").blocking()
employees.set(1, Employee("John Doe", 33))
employees.set(2, Employee("Jane Doe", 29))
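Note that, if your application also reads Employee values back (for example, with employees.get()), the client must know how to deserialize them. A minimal sketch using the factory and class ids declared in the Employee class above:
client = hazelcast.HazelcastClient(
    portable_factories={
        # factory id -> {class id -> class}
        1: {2: Employee},
    }
)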
Before starting to query data, we must create a mapping for the employees map. The details of the CREATE MAPPING statement are discussed in the reference manual. For the Employee class above, the mapping statement is shown below. It is enough to create the mapping once per map.
client.sql.execute(
    """
    CREATE MAPPING employees (
        __key INT,
        name VARCHAR,
        age INT
    )
    TYPE IMap
    OPTIONS (
        'keyFormat' = 'int',
        'valueFormat' = 'portable',
        'valuePortableFactoryId' = '1',
        'valuePortableClassId' = '2'
    )
    """
).result()
The following code prints the names of the employees whose age is less than 30:
result = client.sql.execute("SELECT name FROM employees WHERE age < 30").result()
for row in result:
    name = row["name"]
    print(name)
The following subsections describe how you can access Hazelcast maps and perform queries on them in more detail.
Case Sensitivity
Mapping names and field names are case-sensitive.
For example, you can access an employees map as employees but not as Employees.
Key and Value Objects
A map entry consists of a key and a value. These are accessible through
the __key
and this
aliases. The following query returns the keys and
values of all entries in the map:
SELECT __key, this FROM employees
“SELECT *” Queries
You may use the SELECT * FROM <table> syntax to get all the table fields. The __key and this fields are returned by SELECT * queries if they do not have nested fields. For the employees map, the following query does not return the this field, because the value has nested fields name and age:
-- Returns __key, name, age
SELECT * FROM employees
Key and Value Fields
You may also access the nested fields of a key or a value. The list of exposed fields depends on the serialization format, as described in the Querying Maps with SQL section.
Using Query Parameters
You can use query parameters to build safer and faster SQL queries.
A query parameter is a piece of information that you supply to a query before you run it. Parameters can be used by themselves or as part of a larger expression to form a criterion in the query.
age_to_compare = 30
client.sql.execute("SELECT * FROM employees WHERE age > ?", age_to_compare).result()
Instead of putting data straight into an SQL statement, you use the ? placeholder in your client code to indicate that you will replace that placeholder with a parameter.
Query parameters have the following benefits:
Faster execution of similar queries. If you submit more than one query where only a value changes, the SQL service uses the cached query plan from the first query rather than optimizing each query again, as illustrated in the sketch after this list.
Protection against SQL injection. If you use query parameters, you don’t need to escape special characters in user-provided strings.
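For instance, the following sketch runs the same parameterized statement with different values, so the SQL service can reuse the cached plan. It uses the employees map and the mapping created above:
stmt = "SELECT name FROM employees WHERE age > ?"
for threshold in (20, 30, 40):
    # Only the parameter changes between runs, so the cached query plan is reused.
    with client.sql.execute(stmt, threshold).result() as result:
        for row in result:
            print(threshold, row["name"])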
Querying JSON Objects¶
In Hazelcast, the SQL service supports the following ways of working with JSON data:
json: Maps JSON data to a single column of JSON type, where you can use JsonPath syntax to query and filter it, including nested levels.
json-flat: Maps top-level JSON fields to columns with non-JSON types, where you can query only top-level keys.
json
To query json
objects, you should create an explicit mapping using the
CREATE MAPPING
statement, similar to the example above.
For example, this code snippet creates a mapping to a new map called json_employees, which stores the JSON values as HazelcastJsonValue objects, and queries it using nested fields, which is not possible with the json-flat type:
client.sql.execute(
    """
    CREATE OR REPLACE MAPPING json_employees
    TYPE IMap
    OPTIONS (
        'keyFormat' = 'int',
        'valueFormat' = 'json'
    )
    """
).result()
from hazelcast.core import HazelcastJsonValue

json_employees = client.get_map("json_employees").blocking()

json_employees.set(
    1,
    HazelcastJsonValue(
        {
            "personal": {"name": "John Doe"},
            "job": {"salary": 60000},
        }
    ),
)

json_employees.set(
    2,
    HazelcastJsonValue(
        {
            "personal": {"name": "Jane Doe"},
            "job": {"salary": 80000},
        }
    ),
)
with client.sql.execute(
    """
    SELECT JSON_VALUE(this, '$.personal.name') AS name
    FROM json_employees
    WHERE JSON_VALUE(this, '$.job.salary' RETURNING INT) > ?
    """,
    75000,
).result() as result:
    for row in result:
        print(f"Name: {row['name']}")
The json data type comes with full support for querying JSON in maps and Kafka topics.
JSON Functions
Hazelcast supports the following functions, which can retrieve JSON data.
JSON_QUERY: Extracts a JSON value from a JSON document or a JSON-formatted string that matches a given JsonPath expression.
JSON_VALUE: Extracts a primitive value, such as a string, number, or boolean, that matches a given JsonPath expression. This function returns NULL if a non-primitive value is matched, unless the ON ERROR behavior is changed.
JSON_ARRAY: Returns a JSON array from a list of input data.
JSON_OBJECT: Returns a JSON object from the given key/value pairs.
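As an illustration using the json_employees map created above, JSON_QUERY can extract the nested job object as a JSON value (a sketch; see the Reference Manual for the full function syntax):
with client.sql.execute(
    "SELECT JSON_QUERY(this, '$.job') AS job FROM json_employees WHERE __key = 1"
).result() as result:
    for row in result:
        # row["job"] holds the nested JSON fragment, e.g. {"salary":60000}
        print(row["job"])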
json-flat
To query json-flat
objects, you should create an explicit mapping using the
CREATE MAPPING
statement, similar to the example above.
For example, this code snippet creates a mapping to a new map called json_flat_employees, which stores the JSON values with top-level columns name and salary as HazelcastJsonValue objects, and queries it using those top-level fields:
client.sql.execute(
    """
    CREATE OR REPLACE MAPPING json_flat_employees (
        __key INT,
        name VARCHAR,
        salary INT
    )
    TYPE IMap
    OPTIONS (
        'keyFormat' = 'int',
        'valueFormat' = 'json-flat'
    )
    """
).result()
from hazelcast.core import HazelcastJsonValue

json_flat_employees = client.get_map("json_flat_employees").blocking()

json_flat_employees.set(
    1,
    HazelcastJsonValue(
        {
            "name": "John Doe",
            "salary": 60000,
        }
    ),
)

json_flat_employees.set(
    2,
    HazelcastJsonValue(
        {
            "name": "Jane Doe",
            "salary": 80000,
        }
    ),
)
with client.sql.execute(
    """
    SELECT name
    FROM json_flat_employees
    WHERE salary > ?
    """,
    75000,
).result() as result:
    for row in result:
        print(f"Name: {row['name']}")
Note that, in the json-flat type, top-level columns must be explicitly specified while creating the mapping.
The json-flat format comes with partial support for querying JSON in maps, Kafka topics, and files.
For more information about working with JSON using SQL, see the Working with JSON section of the Hazelcast Reference Manual.
SQL Statements¶
Data Manipulation Language (DML) Statements
SELECT: Read data from a table.
SINK INTO/INSERT INTO: Ingest data into a map and/or forward data to other systems.
UPDATE: Overwrite values in map entries.
DELETE: Delete map entries.
Data Definition Language (DDL) Statements
CREATE MAPPING: Map a local or remote data object to a table that Hazelcast can access.
SHOW MAPPINGS: Get the names of existing mappings.
DROP MAPPING: Remove a mapping.
Job Management Statements
CREATE JOB: Create a job that is not tied to the client session.
ALTER JOB: Restart, suspend, or resume a job.
SHOW JOBS: Get the names of all running jobs.
DROP JOB: Cancel a job.
CREATE OR REPLACE SNAPSHOT (Enterprise only): Create a snapshot of a running job, so you can stop and restart it at a later date.
DROP SNAPSHOT (Enterprise only): Delete an existing snapshot.
Data Types¶
The SQL service supports a set of SQL data types. Every data type is mapped to a Python type that represents the type’s value.
Type Name | Python Type
---|---
BOOLEAN | bool
VARCHAR | str
TINYINT | int
SMALLINT | int
INTEGER | int
BIGINT | int
DECIMAL | decimal.Decimal
REAL | float
DOUBLE | float
DATE | datetime.date
TIME | datetime.time
TIMESTAMP | datetime.datetime
TIMESTAMP_WITH_TIME_ZONE | datetime.datetime (with non-None tzinfo)
OBJECT | Any Python type
JSON | HazelcastJsonValue
Functions and Operators¶
Hazelcast supports logical and IS
predicates, comparison and mathematical
operators, and aggregate, mathematical, trigonometric, string, table-valued,
and special functions.
See the Reference Manual for details.
Improving the Performance of SQL Queries¶
You can improve the performance of queries over maps by indexing map entries.
To find out more about indexing map entries, see the add_index() method.
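For example, a sorted index on the age field queried throughout this chapter could be added as follows. This is a sketch; see the add_index() documentation for the full set of options:
from hazelcast.config import IndexType

employees = client.get_map("employees").blocking()
# Sorted indexes suit range predicates such as "age < 30".
employees.add_index(attributes=["age"], index_type=IndexType.SORTED)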
If you find that your queries lead to out of memory exceptions (OOME), consider decreasing the value of the Jet engine’s max-processor-accumulated-records option.
Limitations¶
SQL has the following limitations. We plan to remove these limitations in future releases.
You cannot run SQL queries on lite members.
The only supported Hazelcast data structure is map. You cannot query other data structures such as replicated maps.
Limited support for joins. See Join Tables.
DBAPI-2 Interface¶
The hazelcast.db module supports the Python standard DBAPI-2 Specification.
Connection¶
The connect() function creates a connection to the cluster and returns a Connection object.
from hazelcast.db import connect
conn = connect()
By default, the connect() function connects to a cluster with the default configuration. There are a few ways to pass the connection parameters.
You can use the following keyword arguments:
host: Host part of the cluster address; localhost by default.
port: Port part of the cluster address; 5701 by default.
cluster_name: Cluster name; dev by default.
user: Username for the cluster. Requires Hazelcast EE.
password: Password for the cluster. Requires Hazelcast EE.
from hazelcast.db import connect
conn = connect(host="localhost", port=5701)
You can also provide a DSN (Data Source Name) string to configure the connection.
The format of the DSN is hz://[user:password]@address1:port1[,address2:port2, ...][?option1=value1[&option2=value2 ...]]
The following options are supported:
cluster.name: Hazelcast cluster name.
cloud.token: Viridian discovery token.
smart: Enables smart routing when true. Defaults to the Python client default.
ssl: Enables SSL for the client connection.
ssl.ca.path: Path to the CA file.
ssl.cert.path: Path to the certificate file.
ssl.key.path: Path to the private key file.
ssl.key.password: Password of the key file.
from hazelcast.db import connect
conn = connect(dsn="hz://admin:ssap@demo.hazelcast.com?cluster.name=demo1")
In case you have to pass options that are not supported by the methods above, you can also pass a hazelcast.config.Config object as the first argument to connect.
from hazelcast.db import connect
from hazelcast.config import Config

config = Config()
# AddressSerializer is assumed to be a user-defined compact serializer (not shown here).
config.compact_serializers = [AddressSerializer()]
conn = connect(config)
Once the connection is created, you can create a hazelcast.db.Cursor object from it to execute queries. This is explained in the next section.
Finally, you can close the Connection object to release its resources when you are done with it.
conn.close()
You can use a with statement to automatically close a Connection.
from hazelcast.db import connect
with connect() as conn:
    ...  # use conn in this block
# conn is automatically closed here
Cursors¶
The first step of executing a query is getting a hazelcast.db.Cursor from the connection.
cursor = conn.cursor()
Then, you can execute a SQL query using the hazelcast.db.Cursor.execute() method. You can use this method to run all kinds of queries.
cursor.execute("SELECT * FROM stocks ORDER BY price")
Use the question mark (?) as a placeholder if you are passing arguments in the query. The actual arguments should be passed in a tuple.
cursor.execute("SELECT * FROM stocks WHERE price > ? ORDER BY price", (50,))
hazelcast.db.Cursor.executemany() is also available, which enables running the same query with different sets of values. This method should only be used with mutating queries, such as INSERT.
data = [
    (1, "2006-03-28", "BUY", "IBM", 1000, 45.0),
    (2, "2006-04-05", "BUY", "MSFT", 1000, 72.0),
    (3, "2006-04-06", "SELL", "IBM", 500, 53.0),
]
cursor.executemany("INSERT INTO stocks VALUES(?, CAST(? AS DATE), ?, ?, ?, ?)", data)
Mutating Queries
Mutating queries such as UPDATE, DELETE and INSERT update or delete data, or add new rows. You can use execute or executemany for those queries.
cursor.execute("INSERT INTO stocks(__key, price) VALUES(10, 40)")
Row Returning Queries
Queries such as SELECT and SHOW return rows. Once you run execute with the query, call one of hazelcast.db.Cursor.fetchone(), hazelcast.db.Cursor.fetchmany() or hazelcast.db.Cursor.fetchall() to get one, some, or all of the rows in the result. The rows are of the hazelcast.sql.SqlRow type. Note that fetchall should only be used for a small, finite set of rows.
cursor.execute("SELECT * FROM stocks")
one_row = cursor.fetchone()
three_more_rows = cursor.fetchmany(3)
rest_of_rows = cursor.fetchall()
Alternatively, you can iterate on the cursor itself.
cursor.execute("SELECT * FROM stocks")
for row in cursor:
    ...  # handle the row
You can access columns in a hazelcast.sql.SqlRow by using the subscription notation, treating the row as a dictionary.
for row in cursor:
    print(row["__key"], row["symbol"], row["price"])
Alternatively, you can treat the row as an array and use indexes to access column values.
for row in cursor:
    print(row[0], row[1], row[2])
Once you are done with the cursor, you can use its hazelcast.db.Cursor.close() method to release its resources.
cursor.close()
Using the with statement, close is called automatically:
with conn.cursor() as cursor:
    cursor.execute("SELECT * FROM stocks")
    for row in cursor:
        ...  # handle the row
# cursor is automatically closed here.
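Putting the pieces together, here is a minimal end-to-end sketch. It assumes a stocks mapping already exists on the cluster, as in the examples above:
from hazelcast.db import connect

with connect(host="localhost", port=5701) as conn:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT * FROM stocks WHERE price > ? ORDER BY price", (50,)
        )
        for row in cursor:
            print(row["symbol"], row["price"])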
Securing Client Connection¶
This chapter describes the security features of Hazelcast Python client. These include using TLS/SSL for connections between members and between clients and members, mutual authentication, username/password authentication, token authentication and Kerberos authentication. These security features require Hazelcast Enterprise edition.
TLS/SSL¶
Hazelcast offers the TLS/SSL protocol, which you can use to establish encrypted communication across your cluster with key stores and trust stores.
A Java keyStore is a file that includes a private key and a public certificate. The equivalent of a key store on the Python client side is the combination of keyfile and certfile.
A Java trustStore is a file that includes a list of certificates trusted by your application (the certificate authority certificates). The equivalent of a trust store on the Python client side is a cafile.
You should set keyStore and trustStore before starting the members. See the next section on how to set keyStore and trustStore on the server side.
TLS/SSL for Hazelcast Members¶
Hazelcast allows you to encrypt socket level communication between Hazelcast members and between Hazelcast clients and members, for end to end encryption. To use it, see the TLS/SSL for Hazelcast Members section.
TLS/SSL for Hazelcast Python Clients¶
TLS/SSL for the Hazelcast Python client can be configured using the ssl_* configuration options. Let's first look at a sample configuration and then go over the configuration options one by one:
from hazelcast.config import SSLProtocol
client = hazelcast.HazelcastClient(
    ssl_enabled=True,
    ssl_cafile="/home/hazelcast/cafile.pem",
    ssl_certfile="/home/hazelcast/certfile.pem",
    ssl_keyfile="/home/hazelcast/keyfile.pem",
    ssl_password="keyfile-password",
    # You can also set this to "TLSv1_3"
    # without importing anything.
    ssl_protocol=SSLProtocol.TLSv1_3,
    ssl_ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA",
    ssl_check_hostname=True,
)
Enabling TLS/SSL¶
TLS/SSL for the Hazelcast Python client can be enabled or disabled using the ssl_enabled option. When this option is set to True, TLS/SSL will be configured with respect to the other SSL options. Setting this option to False causes the other SSL options to be ignored.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_enabled=True
)
The default value is False (disabled).
Setting CA File¶
Certificates of the Hazelcast members can be validated against ssl_cafile. This option should point to the absolute path of the concatenated CA certificates in PEM format. When SSL is enabled and ssl_cafile is not set, a set of default CA certificates from default locations will be used.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_cafile="/home/hazelcast/cafile.pem"
)
Setting Client Certificate¶
When mutual authentication is enabled on the member side, clients or other members should also provide a certificate file that identifies themselves. Then, Hazelcast members can use these certificates to validate the identity of their peers.
The client certificate can be set using the ssl_certfile option. This option should point to the absolute path of the client certificate in PEM format.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_certfile="/home/hazelcast/certfile.pem"
)
Setting Private Key¶
The private key of the ssl_certfile can be set using the ssl_keyfile option. This option should point to the absolute path of the private key file for the client certificate in PEM format.
If this option is not set, the private key will be taken from ssl_certfile. In this case, ssl_certfile should be in the following format:
-----BEGIN RSA PRIVATE KEY-----
... (private key in base64 encoding) ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (certificate in base64 PEM encoding) ...
-----END CERTIFICATE-----
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_keyfile="/home/hazelcast/keyfile.pem"
)
Setting Password of the Private Key¶
If the private key is encrypted with a password, ssl_password will be used to decrypt it. The ssl_password may be a function to call to get the password; in that case, it will be called with no arguments and should return a string, bytes, or bytearray. If the return value is a string, it will be encoded as UTF-8 before being used to decrypt the key. Alternatively, a string, bytes, or bytearray value may be supplied directly as the password.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_password="keyfile-password"
)
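Since ssl_password may also be a callable, the password can be resolved lazily. A sketch that reads it from an environment variable (the variable name is hypothetical):
import os

def keyfile_password():
    # Called with no arguments; must return str, bytes, or bytearray.
    return os.environ["HZ_KEYFILE_PASSWORD"]

client = hazelcast.HazelcastClient(
    ssl_password=keyfile_password
)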
Setting the Protocol¶
ssl_protocol can be used to select the protocol that will be used in the TLS/SSL communication. The Hazelcast Python client offers the following protocols:
SSLv2: SSL 2.0 Protocol. RFC 6176 prohibits the usage of SSL 2.0.
SSLv3: SSL 3.0 Protocol. RFC 7568 prohibits the usage of SSL 3.0.
TLSv1: TLS 1.0 Protocol, described in RFC 2246.
TLSv1_1: TLS 1.1 Protocol, described in RFC 4346.
TLSv1_2: TLS 1.2 Protocol, described in RFC 5246.
TLSv1_3: TLS 1.3 Protocol, described in RFC 8446.
Note that TLSv1_3 requires at least Python 3.7 built with OpenSSL 1.1.1+.
These protocol versions can be selected using the ssl_protocol option as follows:
from hazelcast.config import SSLProtocol
client = hazelcast.HazelcastClient(
    ssl_protocol=SSLProtocol.TLSv1_3
)
Note that the Hazelcast Python client and the Hazelcast members should have the same protocol version in order for TLS/SSL to work. In case of the protocol mismatch, connection attempts will be refused.
The default value is SSLProtocol.TLSv1_2.
Setting Cipher Suites¶
Cipher suites that will be used in the TLS/SSL communication can be set using the ssl_ciphers option. Cipher suites should be in the OpenSSL cipher list format. More than one cipher suite can be set by separating them with a colon.
The TLS/SSL implementation will honor the cipher suite order, so the Hazelcast Python client will offer the ciphers to the Hazelcast members in the given order.
Note that when this option is not set, all available ciphers will be offered to the Hazelcast members in their default order.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_ciphers="DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA"
)
Checking Hostname¶
Warning
This feature requires Python 3.7 or newer.
During the TLS/SSL handshake, the client can verify that the hostname or the IP address of the member matches with the information provided in the Subject Alternative Name extension or Common Name field in the Subject field of the member’s certificate.
The hostname used during the verification process is the hostname of the configured member address in the client constructor.
By default, hostname verification is disabled, but it is highly encouraged to enable it to avoid certain types of attack vectors.
The following is an example configuration:
client = hazelcast.HazelcastClient(
    ssl_check_hostname=True,
)
Mutual Authentication¶
As explained above, Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast clients have trust stores used to define which members they can trust.
Using mutual authentication, the clients also have their key stores and members have their trust stores so that the members can know which clients they can trust.
To enable mutual authentication, you first need to set the following property on the server side in the hazelcast.xml file:
<network>
    <ssl enabled="true">
        <properties>
            <property name="javax.net.ssl.mutualAuthentication">REQUIRED</property>
        </properties>
    </ssl>
</network>
You can see the details of setting mutual authentication on the server side in the Mutual Authentication section of the Hazelcast Reference Manual.
On the client side, you have to provide ssl_cafile, ssl_certfile and ssl_keyfile on top of the other TLS/SSL configurations. See the TLS/SSL for Hazelcast Python Clients section for the details of these options.
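For example, combining the options described earlier in this chapter:
client = hazelcast.HazelcastClient(
    ssl_enabled=True,
    ssl_cafile="/home/hazelcast/cafile.pem",
    ssl_certfile="/home/hazelcast/certfile.pem",
    ssl_keyfile="/home/hazelcast/keyfile.pem",
)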
Username/Password Authentication¶
You can protect your cluster using a username and password pair. In order to use it, enable it in member configuration:
<security enabled="true">
    <member-authentication realm="passwordRealm"/>
    <realms>
        <realm name="passwordRealm">
            <identity>
                <username-password username="MY-USERNAME" password="MY-PASSWORD" />
            </identity>
        </realm>
    </realms>
</security>
Then, on the client side, set creds_username and creds_password in the configuration:
client = hazelcast.HazelcastClient(
    creds_username="MY-USERNAME",
    creds_password="MY-PASSWORD"
)
Check out the Password Credentials section of the Hazelcast Documentation.
Token-Based Authentication¶
The Python client supports token-based authentication via token providers. A token provider is a class derived from hazelcast.security.TokenProvider.
In order to use token-based authentication, first define a token identity in the member configuration:
<security enabled="true">
    <member-authentication realm="tokenRealm"/>
    <realms>
        <realm name="tokenRealm">
            <identity>
                <token>MY-SECRET</token>
            </identity>
        </realm>
    </realms>
</security>
Using hazelcast.security.BasicTokenProvider, you can pass the given token to the member:
from hazelcast.security import BasicTokenProvider

token_provider = BasicTokenProvider("MY-SECRET")
client = hazelcast.HazelcastClient(
    token_provider=token_provider
)
Kerberos Authentication¶
Python client supports Kerberos authentication with an external package. The package provides the necessary token provider that handles the authentication against the KDC (key distribution center) with the given credentials, receives and caches the ticket, and finally retrieves the token.
For more information and possible client and server configurations, refer to the documentation of the hazelcast-kerberos package.
Development and Testing¶
If you want to help with bug fixes, develop new features or tweak the implementation to your application’s needs, you can follow the steps in this section.
Building and Using Client From Sources¶
Follow the below steps to build and install Hazelcast Python client from its source:
Clone the GitHub repository.
Run python setup.py install to install the Python client.
If you are planning to contribute:
Run pip install -r requirements-dev.txt to install development dependencies.
Use black to reformat the code by running the black --config black.toml . command.
Use mypy to check type annotations by running the mypy hazelcast command.
Make sure that tests are passing by following the steps described in the Testing section.
Testing¶
In order to test Hazelcast Python client locally, you will need the following:
Java 8 or newer
Maven
The following command starts the tests:
python run_tests.py
The test script automatically downloads hazelcast-remote-controller and Hazelcast using Maven.
Getting Help¶
You can use the following channels for your questions and development/usage issues:
Contributing¶
Besides your development contributions as explained in the Development and Testing section, you can always open a pull request on this repository for your other requests.
License¶
Copyright¶
Copyright (c) 2008-2023, Hazelcast, Inc. All Rights Reserved.
Visit hazelcast.com for more information.