Table Of Contents

6.7. Go

import "github.com/bureau14/qdb-api-go"

Package qdb provides an API to a quasardb server

6.7.1. Index

6.7.2. Constants

6.7.3. func NeverExpires

func NeverExpires() time.Time

NeverExpires : returns a time value corresponding to quasardb's "never expires" value

6.7.4. func PreserveExpiration

func PreserveExpiration() time.Time

PreserveExpiration : returns a time value corresponding to quasardb's "preserve expiration" value

6.7.5. type BlobEntry

type BlobEntry struct {
    Entry
}

BlobEntry : blob data type

6.7.6. func (*BlobEntry) CompareAndSwap

func (entry *BlobEntry) CompareAndSwap(newValue []byte, newComparand []byte, expiry time.Time) ([]byte, error)

CompareAndSwap : Atomically compares the entry with the comparand and updates it to newValue if, and only if, they match.

The function returns the original value of the entry in case of a mismatch; when the contents match, no content is returned.
The entry must already exist.
Update will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.
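These compare-and-swap semantics can be sketched in plain Go. The `compareAndSwap` helper below is a hypothetical in-memory stand-in for the server-side entry, not part of the API; the real call requires a connected handle.

```go
package main

import (
	"bytes"
	"fmt"
)

// compareAndSwap mimics BlobEntry.CompareAndSwap on a local value:
// if current matches comparand bit for bit, the entry becomes newValue
// and nil is returned; otherwise the original content is returned.
func compareAndSwap(current *[]byte, newValue, comparand []byte) []byte {
	if bytes.Equal(*current, comparand) {
		*current = newValue
		return nil // match: no content is returned
	}
	return *current // mismatch: original value is returned
}

func main() {
	entry := []byte("v1")
	if res := compareAndSwap(&entry, []byte("v2"), []byte("v1")); res == nil {
		fmt.Println("swapped to", string(entry)) // swapped to v2
	}
	if res := compareAndSwap(&entry, []byte("v3"), []byte("v1")); res != nil {
		fmt.Println("mismatch, entry is still", string(res)) // mismatch, entry is still v2
	}
}
```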

6.7.7. func (BlobEntry) Get

func (entry BlobEntry) Get() ([]byte, error)

Get : Retrieves an entry’s content

If the entry does not exist, the function fails and returns an 'alias not found' error.

6.7.8. func (BlobEntry) GetAndRemove

func (entry BlobEntry) GetAndRemove() ([]byte, error)

GetAndRemove : Atomically gets an entry from the quasardb server and removes it.

If the entry does not exist, the function fails and returns an 'alias not found' error.

6.7.9. func (*BlobEntry) GetAndUpdate

func (entry *BlobEntry) GetAndUpdate(newContent []byte, expiry time.Time) ([]byte, error)

GetAndUpdate : Atomically gets and updates (in this order) the entry on the quasardb server.

The entry must already exist.

6.7.10. func (BlobEntry) GetNoAlloc

func (entry BlobEntry) GetNoAlloc(content []byte) (int, error)

GetNoAlloc : Retrieves an entry’s content into an already allocated buffer

If the entry does not exist, the function fails and returns an 'alias not found' error.
If the buffer is not large enough to hold the data, the function fails and returns a
'buffer is too small' error; the entry size is nevertheless returned in the content
length so that the caller may resize its buffer and try again.
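The resize-and-retry pattern this enables can be sketched as follows. Here `getNoAlloc` is a hypothetical local stand-in with the same contract as the documented method, so the example runs without a cluster.

```go
package main

import (
	"errors"
	"fmt"
)

var errBufferTooSmall = errors.New("buffer is too small")

// getNoAlloc copies data into buf; on a too-small buffer it fails but
// still reports the required size, matching the documented contract.
func getNoAlloc(data, buf []byte) (int, error) {
	if len(buf) < len(data) {
		return len(data), errBufferTooSmall
	}
	return copy(buf, data), nil
}

func main() {
	data := []byte("some blob content")
	buf := make([]byte, 4) // deliberately too small
	n, err := getNoAlloc(data, buf)
	if errors.Is(err, errBufferTooSmall) {
		buf = make([]byte, n) // resize to the reported entry size
		n, err = getNoAlloc(data, buf)
	}
	fmt.Println(n, err, string(buf[:n])) // 17 <nil> some blob content
}
```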

6.7.11. func (BlobEntry) Put

func (entry BlobEntry) Put(content []byte, expiry time.Time) error

Put : Creates a new entry and sets its content to the provided blob.

If the entry already exists, the function fails and returns an 'alias already exists' error.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.

6.7.12. func (BlobEntry) RemoveIf

func (entry BlobEntry) RemoveIf(comparand []byte) error

RemoveIf : Atomically removes the entry on the server if the content matches.

The entry must already exist.
Removal will occur if and only if the content of the entry matches bit for bit the content of the comparand buffer.

6.7.13. func (*BlobEntry) Update

func (entry *BlobEntry) Update(newContent []byte, expiry time.Time) error

Update : Creates or updates an entry and sets its content to the provided blob.

If the entry already exists, the function will modify the entry.
You can specify an expiry or use NeverExpires if you don’t want the entry to expire.
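The difference between Put (create only) and Update (create or overwrite) can be sketched with a local map acting as a hypothetical stand-in for the cluster; `store`, `put` and `update` below are illustrative helpers, not part of the API.

```go
package main

import (
	"errors"
	"fmt"
)

var errAliasAlreadyExists = errors.New("alias already exists")

type store map[string][]byte

// put mimics BlobEntry.Put: creation only, fails on an existing alias.
func (s store) put(alias string, content []byte) error {
	if _, ok := s[alias]; ok {
		return errAliasAlreadyExists
	}
	s[alias] = content
	return nil
}

// update mimics BlobEntry.Update: creates the entry or overwrites it.
func (s store) update(alias string, content []byte) {
	s[alias] = content
}

func main() {
	s := store{}
	fmt.Println(s.put("k", []byte("a"))) // <nil>
	fmt.Println(s.put("k", []byte("b"))) // alias already exists
	s.update("k", []byte("b"))           // succeeds regardless
	fmt.Println(string(s["k"]))          // b
}
```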

6.7.14. type Cluster

type Cluster struct {
    HandleType
}

Cluster : An object permitting calls to a cluster

6.7.15. func (Cluster) PurgeAll

func (c Cluster) PurgeAll() error

PurgeAll : Irremediably removes all data from all the nodes of the cluster.

This function is useful when quasardb is used as a cache and is not the golden source.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.
By default, the cluster does not allow this operation and the function returns a qdb_e_operation_disabled error.

6.7.16. func (Cluster) PurgeCache

func (c Cluster) PurgeCache() error

PurgeCache : Removes all cached data from all the nodes of the cluster.

This function is disabled on a transient cluster.
Prefer PurgeAll in this case.

This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.

6.7.17. func (Cluster) TrimAll

func (c Cluster) TrimAll() error

TrimAll : Trims all data on all the nodes of the cluster.

Quasardb uses Multi-Version Concurrency Control (MVCC) as the foundation of its transaction engine. It will automatically clean up old versions as entries are accessed.
Entries that are not accessed may not be cleaned up, resulting in increasing disk usage.
This call is not atomic: if the command cannot be dispatched on the whole cluster, it will be dispatched on as many nodes as possible and the function will return with a qdb_e_ok code.

This function will request each node to trim all entries, release unused memory and compact files on disk.
Because this operation is I/O and CPU intensive, it is not recommended to run it when the cluster is heavily used.

6.7.18. func (Cluster) WaitForStabilization

func (c Cluster) WaitForStabilization(timeout time.Duration) error

WaitForStabilization : Waits for all nodes of the cluster to stabilize.

Takes a timeout value as a time.Duration.

6.7.19. type Compression

type Compression C.qdb_compression_t

Compression : compression parameter

const (
    CompNone Compression = C.qdb_comp_none
    CompFast Compression = C.qdb_comp_fast
    CompBest Compression = C.qdb_comp_best
)

Compression values:

CompNone : No compression.
CompFast : Maximum compression speed, potentially minimum compression ratio. This is currently the default.
CompBest : Maximum compression ratio, potentially minimum compression speed. This is currently not implemented.

6.7.20. type Encryption

type Encryption C.qdb_encryption_t

Encryption : encryption option

const (
    EncryptNone Encryption = C.qdb_crypt_none
    EncryptAES  Encryption = C.qdb_crypt_aes_gcm_256
)

Encryption values:

EncryptNone : No encryption.
EncryptAES : Uses AES-GCM 256-bit encryption.

6.7.21. type Entry

type Entry struct {
    HandleType
}

Entry : a base type for composition; it cannot be constructed directly

6.7.22. func (Entry) Alias

func (e Entry) Alias() string

Alias : Returns the alias string of the entry

6.7.23. func (Entry) AttachTag

func (e Entry) AttachTag(tag string) error

AttachTag : Adds a tag to an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag may or may not exist.

6.7.24. func (Entry) AttachTags

func (e Entry) AttachTags(tags []string) error

AttachTags : Adds a collection of tags to a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The function will ignore existing tags.
The entry must exist.
The tag may or may not exist.

6.7.25. func (Entry) DetachTag

func (e Entry) DetachTag(tag string) error

DetachTag : Removes a tag from an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tag must exist.

6.7.26. func (Entry) DetachTags

func (e Entry) DetachTags(tags []string) error

DetachTags : Removes a collection of tags from a single entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.
The tags must exist.

6.7.27. func (Entry) ExpiresAt

func (e Entry) ExpiresAt(expiry time.Time) error

ExpiresAt : Sets the absolute expiration time of an entry.

Blobs and integers can have an expiration time and will be automatically removed by the cluster when they expire.

The absolute expiration time is expressed in milliseconds since the Unix epoch, 1 January 1970, 00:00:00 UTC.
To use a relative expiration time (that is expiration relative to the time of the call), use ExpiresFromNow.

To remove the expiration time of an entry, specify the value NeverExpires as the expiry parameter.
Values in the past are refused, but the cluster will have a certain tolerance to account for clock skews.

6.7.28. func (Entry) ExpiresFromNow

func (e Entry) ExpiresFromNow(expiry time.Duration) error

ExpiresFromNow : Sets the expiration time of an entry, relative to the current time of the client.

Blobs and integers can have an expiration time and will automatically be removed by the cluster when they expire.

The expiration is relative to the current time of the machine.
To remove the expiration time of an entry or to use an absolute expiration time use ExpiresAt.

6.7.29. func (Entry) GetLocation

func (e Entry) GetLocation() (NodeLocation, error)

GetLocation : Returns the primary node of an entry.

The exact location of an entry should be assumed random and users should not bother about its location as the API will transparently locate the best node for the requested operation.
This function is intended for higher level APIs that need to optimize transfers and potentially push computation close to the data.

6.7.30. func (Entry) GetMetadata

func (e Entry) GetMetadata() (Metadata, error)

GetMetadata : Gets the meta-information about an entry, if it exists.

6.7.31. func (Entry) GetTagged

func (e Entry) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

6.7.32. func (Entry) GetTags

func (e Entry) GetTags() ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

6.7.33. func (Entry) HasTag

func (e Entry) HasTag(tag string) error

HasTag : Tests if an entry has the requested tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

6.7.34. func (Entry) Remove

func (e Entry) Remove() error

Remove : Removes an entry from the cluster, regardless of its type.

This call will remove the entry, whether it is a blob, integer, deque, or stream.
It will properly untag the entry.
If the entry spans multiple entries or nodes (deques and streams), all blocks will be properly removed.

The call is ACID, regardless of the type of the entry, and a transaction will be created if need be.

6.7.35. type EntryType

type EntryType C.qdb_entry_type_t

EntryType : An enumeration representing the possible entry types.

const (
    EntryUnitialized EntryType = C.qdb_entry_uninitialized
    EntryBlob        EntryType = C.qdb_entry_blob
    EntryInteger     EntryType = C.qdb_entry_integer
    EntryHSet        EntryType = C.qdb_entry_hset
    EntryTag         EntryType = C.qdb_entry_tag
    EntryDeque       EntryType = C.qdb_entry_deque
    EntryStream      EntryType = C.qdb_entry_stream
    EntryTS          EntryType = C.qdb_entry_ts
)

EntryType Values

EntryUnitialized : Uninitialized value.
EntryBlob : A binary large object (blob).
EntryInteger : A signed 64-bit integer.
EntryHSet : A distributed hash set.
EntryTag : A tag.
EntryDeque : A distributed double-ended queue (deque).
EntryStream : A distributed binary stream.
EntryTS : A distributed time series.

6.7.36. type ErrorType

type ErrorType C.qdb_error_t

ErrorType : wraps the C API's qdb_error_t

const (
    Success                      ErrorType = C.qdb_e_ok
    Created                      ErrorType = C.qdb_e_ok_created
    ErrUninitialized             ErrorType = C.qdb_e_uninitialized
    ErrAliasNotFound             ErrorType = C.qdb_e_alias_not_found
    ErrAliasAlreadyExists        ErrorType = C.qdb_e_alias_already_exists
    ErrOutOfBounds               ErrorType = C.qdb_e_out_of_bounds
    ErrSkipped                   ErrorType = C.qdb_e_skipped
    ErrIncompatibleType          ErrorType = C.qdb_e_incompatible_type
    ErrContainerEmpty            ErrorType = C.qdb_e_container_empty
    ErrContainerFull             ErrorType = C.qdb_e_container_full
    ErrElementNotFound           ErrorType = C.qdb_e_element_not_found
    ErrElementAlreadyExists      ErrorType = C.qdb_e_element_already_exists
    ErrOverflow                  ErrorType = C.qdb_e_overflow
    ErrUnderflow                 ErrorType = C.qdb_e_underflow
    ErrTagAlreadySet             ErrorType = C.qdb_e_tag_already_set
    ErrTagNotSet                 ErrorType = C.qdb_e_tag_not_set
    ErrTimeout                   ErrorType = C.qdb_e_timeout
    ErrConnectionRefused         ErrorType = C.qdb_e_connection_refused
    ErrConnectionReset           ErrorType = C.qdb_e_connection_reset
    ErrUnstableCluster           ErrorType = C.qdb_e_unstable_cluster
    ErrTryAgain                  ErrorType = C.qdb_e_try_again
    ErrConflict                  ErrorType = C.qdb_e_conflict
    ErrNotConnected              ErrorType = C.qdb_e_not_connected
    ErrResourceLocked            ErrorType = C.qdb_e_resource_locked
    ErrSystemRemote              ErrorType = C.qdb_e_system_remote
    ErrSystemLocal               ErrorType = C.qdb_e_system_local
    ErrInternalRemote            ErrorType = C.qdb_e_internal_remote
    ErrInternalLocal             ErrorType = C.qdb_e_internal_local
    ErrNoMemoryRemote            ErrorType = C.qdb_e_no_memory_remote
    ErrNoMemoryLocal             ErrorType = C.qdb_e_no_memory_local
    ErrInvalidProtocol           ErrorType = C.qdb_e_invalid_protocol
    ErrHostNotFound              ErrorType = C.qdb_e_host_not_found
    ErrBufferTooSmall            ErrorType = C.qdb_e_buffer_too_small
    ErrNotImplemented            ErrorType = C.qdb_e_not_implemented
    ErrInvalidVersion            ErrorType = C.qdb_e_invalid_version
    ErrInvalidArgument           ErrorType = C.qdb_e_invalid_argument
    ErrInvalidHandle             ErrorType = C.qdb_e_invalid_handle
    ErrReservedAlias             ErrorType = C.qdb_e_reserved_alias
    ErrUnmatchedContent          ErrorType = C.qdb_e_unmatched_content
    ErrInvalidIterator           ErrorType = C.qdb_e_invalid_iterator
    ErrEntryTooLarge             ErrorType = C.qdb_e_entry_too_large
    ErrTransactionPartialFailure ErrorType = C.qdb_e_transaction_partial_failure
    ErrOperationDisabled         ErrorType = C.qdb_e_operation_disabled
    ErrOperationNotPermitted     ErrorType = C.qdb_e_operation_not_permitted
    ErrIteratorEnd               ErrorType = C.qdb_e_iterator_end
    ErrInvalidReply              ErrorType = C.qdb_e_invalid_reply
    ErrNoSpaceLeft               ErrorType = C.qdb_e_no_space_left
    ErrQuotaExceeded             ErrorType = C.qdb_e_quota_exceeded
    ErrAliasTooLong              ErrorType = C.qdb_e_alias_too_long
    ErrClockSkew                 ErrorType = C.qdb_e_clock_skew
    ErrAccessDenied              ErrorType = C.qdb_e_access_denied
    ErrLoginFailed               ErrorType = C.qdb_e_login_failed
    ErrColumnNotFound            ErrorType = C.qdb_e_column_not_found
    ErrQueryTooComplex           ErrorType = C.qdb_e_query_too_complex
    ErrInvalidCryptoKey          ErrorType = C.qdb_e_invalid_crypto_key
    ErrInvalidQuery              ErrorType = C.qdb_e_invalid_query
    ErrInvalidRegex              ErrorType = C.qdb_e_invalid_regex
)

Success : Success.
Created : Success. A new entry has been created.
ErrUninitialized : Uninitialized error.
ErrAliasNotFound : Entry alias/key was not found.
ErrAliasAlreadyExists : Entry alias/key already exists.
ErrOutOfBounds : Index out of bounds.
ErrSkipped : Skipped operation. Used in batches and transactions.
ErrIncompatibleType : Entry or column is incompatible with the operation.
ErrContainerEmpty : Container is empty.
ErrContainerFull : Container is full.
ErrElementNotFound : Element was not found.
ErrElementAlreadyExists : Element already exists.
ErrOverflow : Arithmetic operation overflows.
ErrUnderflow : Arithmetic operation underflows.
ErrTagAlreadySet : Tag is already set.
ErrTagNotSet : Tag is not set.
ErrTimeout : Operation timed out.
ErrConnectionRefused : Connection was refused.
ErrConnectionReset : Connection was reset.
ErrUnstableCluster : Cluster is unstable.
ErrTryAgain : Please retry.
ErrConflict : There is another ongoing conflicting operation.
ErrNotConnected : Handle is not connected.
ErrResourceLocked : Resource is locked.
ErrSystemRemote : System error on remote node (server-side). Please check errno or GetLastError() for the actual error.
ErrSystemLocal : System error on local system (client-side). Please check errno or GetLastError() for the actual error.
ErrInternalRemote : Internal error on remote node (server-side).
ErrInternalLocal : Internal error on local system (client-side).
ErrNoMemoryRemote : No memory on remote node (server-side).
ErrNoMemoryLocal : No memory on local system (client-side).
ErrInvalidProtocol : Protocol is invalid.
ErrHostNotFound : Host was not found.
ErrBufferTooSmall : Buffer is too small.
ErrNotImplemented : Operation is not implemented.
ErrInvalidVersion : Version is invalid.
ErrInvalidArgument : Argument is invalid.
ErrInvalidHandle : Handle is invalid.
ErrReservedAlias : Alias/key is reserved.
ErrUnmatchedContent : Content did not match.
ErrInvalidIterator : Iterator is invalid.
ErrEntryTooLarge : Entry is too large.
ErrTransactionPartialFailure : Transaction failed partially.
ErrOperationDisabled : Operation has not been enabled in cluster configuration.
ErrOperationNotPermitted : Operation is not permitted.
ErrIteratorEnd : Iterator reached the end.
ErrInvalidReply : Cluster sent an invalid reply.
ErrNoSpaceLeft : No more space on disk.
ErrQuotaExceeded : Disk space quota has been reached.
ErrAliasTooLong : Alias is too long.
ErrClockSkew : Cluster clock skew is too high.
ErrAccessDenied : Access denied.
ErrLoginFailed : Login failed.
ErrColumnNotFound : Column was not found.
ErrQueryTooComplex : Query is too complex.
ErrInvalidCryptoKey : Security key is invalid.
ErrInvalidQuery : Query is invalid.
ErrInvalidRegex : Regular expression is invalid.

6.7.37. func (ErrorType) Error

func (e ErrorType) Error() string

6.7.38. type HandleType

type HandleType struct {
}

HandleType : An opaque handle to internal API-allocated structures needed for maintaining connection to a cluster.

6.7.39. func MustSetupHandle

func MustSetupHandle(clusterURI string, timeout time.Duration) HandleType

MustSetupHandle : Sets up a handle; panics on error.

The handle is already opened with the TCP protocol.
The handle is already connected to the given clusterURI.

6.7.40. func MustSetupSecuredHandle

func MustSetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) HandleType

MustSetupSecuredHandle : Sets up a secured handle; panics on error.

The handle is already opened with the TCP protocol.
The handle is already secured with the cluster public key and the user credential files provided
(note: the filenames are needed, not the content of the files).
The handle is already connected to the given clusterURI.

6.7.41. func NewHandle

func NewHandle() (HandleType, error)

NewHandle : Creates a new handle; returns an error if needed.

The handle is already opened (but not connected) with the TCP protocol.

6.7.42. func SetupHandle

func SetupHandle(clusterURI string, timeout time.Duration) (HandleType, error)

SetupHandle : Sets up a handle; returns an error if needed.

The handle is already opened with the TCP protocol.
The handle is already connected to the given clusterURI.

6.7.43. func SetupSecuredHandle

func SetupSecuredHandle(clusterURI, clusterPublicKeyFile, userCredentialFile string, timeout time.Duration, encryption Encryption) (HandleType, error)

SetupSecuredHandle : Sets up a secured handle; returns an error if needed.

The handle is already opened with the TCP protocol.
The handle is already secured with the cluster public key and the user credential files provided
(note: the filenames are needed, not the content of the files).
The handle is already connected to the given clusterURI.

6.7.44. func (HandleType) APIBuild

func (h HandleType) APIBuild() string

APIBuild : Returns a string describing the exact API build.

6.7.45. func (HandleType) APIVersion

func (h HandleType) APIVersion() string

APIVersion : Returns a string describing the API version.

6.7.46. func (HandleType) AddClusterPublicKey

func (h HandleType) AddClusterPublicKey(clusterPublicKeyFile string) error

AddClusterPublicKey : Adds the cluster public key from a cluster config file.

6.7.47. func (HandleType) AddUserCredentials

func (h HandleType) AddUserCredentials(userCredentialFile string) error

AddUserCredentials : Adds a username and key from a user config file.

6.7.48. func (HandleType) Blob

func (h HandleType) Blob(alias string) BlobEntry

Blob : Create a blob entry object

6.7.49. func (HandleType) Close

func (h HandleType) Close() error

Close : Closes a previously opened handle.

This terminates all connections and releases all internal buffers,
including buffers which may have been allocated as a result of batch operations or get operations.

6.7.50. func (HandleType) Cluster

func (h HandleType) Cluster() *Cluster

Cluster : Create a cluster object to execute commands on a cluster

6.7.51. func (HandleType) Connect

func (h HandleType) Connect(clusterURI string) error

Connect : Connects a previously opened handle.

Binds the client instance to a quasardb cluster and connects to at least one node within it.
Quasardb URIs are of the form qdb://<address>:<port>, where <address> is either an IPv4 address, an IPv6 address (surrounded by square brackets), or a domain name. It is recommended to specify multiple addresses in case the designated node is unavailable.

URI examples:
    qdb://myserver.org:2836 - Connects to myserver.org on the port 2836
    qdb://127.0.0.1:2836 - Connects to the local IPv4 loopback on the port 2836
    qdb://myserver1.org:2836,myserver2.org:2836 - Connects to myserver1.org or myserver2.org on the port 2836
    qdb://[::1]:2836 - Connects to the local IPv6 loopback on the port 2836
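A multi-address URI like the third example can be assembled from individual endpoints with plain string handling; `clusterURI` below is a hypothetical convenience helper, not part of the API.

```go
package main

import (
	"fmt"
	"strings"
)

// clusterURI joins host:port endpoints into a qdb:// URI with
// fallback addresses, matching the comma-separated form above.
func clusterURI(endpoints ...string) string {
	return "qdb://" + strings.Join(endpoints, ",")
}

func main() {
	uri := clusterURI("myserver1.org:2836", "myserver2.org:2836")
	fmt.Println(uri) // qdb://myserver1.org:2836,myserver2.org:2836
}
```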

6.7.52. func (HandleType) GetTagged

func (h HandleType) GetTagged(tag string) ([]string, error)

GetTagged : Retrieves all entries that have the specified tag.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The tag must exist.
The complexity of this function is constant.

6.7.53. func (HandleType) GetTags

func (h HandleType) GetTags(entryAlias string) ([]string, error)

GetTags : Retrieves all the tags of an entry.

Tagging an entry enables you to search for entries based on their tags. Tags scale across nodes.
The entry must exist.

6.7.54. func (HandleType) Integer

func (h HandleType) Integer(alias string) IntegerEntry

Integer : Create an integer entry object

6.7.55. func (HandleType) Node

func (h HandleType) Node(uri string) *Node

Node : Create a node object

6.7.56. func (HandleType) Open

func (h HandleType) Open(protocol Protocol) error

Open : Creates a handle.

No connection will be established.
Not needed if you created your handle with NewHandle.

6.7.57. func (HandleType) Query

func (h HandleType) Query() *Query

Query : Create a query object to execute

6.7.58. func (HandleType) QueryExp

func (h HandleType) QueryExp(query string) *QueryExp

QueryExp : Create an experimental query object to execute

6.7.59. func (HandleType) Release

func (h HandleType) Release(buffer unsafe.Pointer)

Release : Releases an API-allocated buffer.

Failure to properly call this function may result in excessive memory usage.
Most operations that return content (e.g. batch operations, qdb_blob_get, qdb_blob_get_and_update, qdb_blob_compare_and_swap, ...)
allocate a buffer for the content and will not release it until you either call this function or close the handle.

The function is able to release any kind of buffer allocated by a quasardb API call, whether it’s a single buffer, an array, or an array of buffers.

6.7.60. func (HandleType) SetCompression

func (h HandleType) SetCompression(compressionLevel Compression) error

SetCompression : Set the compression level for all future messages emitted by the specified handle.

Regardless of this parameter, the API will be able to read whatever compression the server uses.

6.7.61. func (HandleType) SetEncryption

func (h HandleType) SetEncryption(encryption Encryption) error

SetEncryption : Sets the encryption method used by the handle.

6.7.62. func (HandleType) SetMaxCardinality

func (h HandleType) SetMaxCardinality(maxCardinality uint) error

SetMaxCardinality : Sets the maximum allowed cardinality of a quasardb query.

The default value is 10,007. The minimum allowed value is 100.

6.7.63. func (HandleType) SetTimeout

func (h HandleType) SetTimeout(timeout time.Duration) error

SetTimeout : Sets the timeout of all network operations.

The lower the timeout, the higher the risk of having timeout errors.
Keep in mind that the server-side timeout might be shorter.

6.7.64. func (HandleType) Timeseries

func (h HandleType) Timeseries(alias string) TimeseriesEntry

Timeseries : Create a timeseries entry object

6.7.65. func (HandleType) TsBatch

func (h HandleType) TsBatch(cols ...TsBatchColumnInfo) (*TsBatch, error)

TsBatch : create a batch object for the specified columns

6.7.66. type IntegerEntry

type IntegerEntry struct {
    Entry
}

IntegerEntry : int data type

6.7.67. func (IntegerEntry) Add

func (entry IntegerEntry) Add(added int64) (int64, error)

Add : Atomically increases or decreases a signed 64-bit integer.

The specified entry will be atomically increased (or decreased) according to the given addend value:
    To increase the value, specify a positive addend
    To decrease the value, specify a negative addend

The function returns the result of the operation.
The entry must already exist.
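The addend semantics can be sketched on a local value; `add` below is a hypothetical in-memory stand-in for IntegerEntry.Add (the real call is atomic on the server side and requires a connected handle).

```go
package main

import "fmt"

// add mimics IntegerEntry.Add on a local value: a positive addend
// increases the entry, a negative addend decreases it, and the
// result of the operation is returned.
func add(entry *int64, addend int64) int64 {
	*entry += addend
	return *entry
}

func main() {
	v := int64(10)
	fmt.Println(add(&v, 5))  // 15
	fmt.Println(add(&v, -3)) // 12
}
```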

6.7.68. func (IntegerEntry) Get

func (entry IntegerEntry) Get() (int64, error)

Get : Atomically retrieves the value of a signed 64-bit integer.

Atomically retrieves the value of an existing 64-bit integer.

6.7.69. func (IntegerEntry) Put

func (entry IntegerEntry) Put(content int64, expiry time.Time) error

Put : Creates a new signed 64-bit integer.

Atomically creates an entry of the given alias and sets it to a cross-platform signed 64-bit integer.
If the entry already exists, the function returns an error.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.
If you want to create or update an entry use Update.

The value will be correctly translated independently of the endianness of the client’s platform.

6.7.70. func (*IntegerEntry) Update

func (entry *IntegerEntry) Update(newContent int64, expiry time.Time) error

Update : Creates or updates a signed 64-bit integer.

Atomically updates an entry of the given alias to the provided value.
If the entry doesn’t exist, it will be created.

You can specify an expiry time or use NeverExpires if you don’t want the entry to expire.

6.7.71. type Metadata

type Metadata struct {
    Ref              RefID
    Type             EntryType
    Size             uint64
    ModificationTime time.Time
    ExpiryTime       time.Time
}

Metadata : A structure representing the metadata of an entry in the database.

6.7.72. type Node

type Node struct {
    HandleType
}

Node : a structure giving access to various information about, and actions on, a node

6.7.73. func (Node) Config

func (n Node) Config() (NodeConfig, error)

Config : Returns the configuration of a node.

The configuration is a JSON object, as described in the documentation.

6.7.74. func (Node) RawConfig

func (n Node) RawConfig() ([]byte, error)

RawConfig : Returns the configuration of a node.

The configuration is a JSON object as a byte array, as described in the documentation.

6.7.75. func (Node) RawStatus

func (n Node) RawStatus() ([]byte, error)

RawStatus : Returns the status of a node.

The status is a JSON object as a byte array and contains current information on the node state, as described in the documentation.

6.7.76. func (Node) RawTopology

func (n Node) RawTopology() ([]byte, error)

RawTopology : Returns the topology of a node.

The topology is a JSON object as a byte array containing the node address, and the addresses of its successor and predecessor.

6.7.77. func (Node) Status

func (n Node) Status() (NodeStatus, error)

Status : Returns the status of a node.

The status is a JSON object and contains current information on the node state, as described in the documentation.

6.7.78. func (Node) Topology

func (n Node) Topology() (NodeTopology, error)

Topology : Returns the topology of a node.

The topology is a JSON object containing the node address, and the addresses of its successor and predecessor.

6.7.79. type NodeConfig

type NodeConfig struct {
    Local struct {
        Depot struct {
            SyncEveryWrite         bool   `json:"sync_every_write"`
            Root                   string `json:"root"`
            HeliumURL              string `json:"helium_url"`
            MaxBytes               int64  `json:"max_bytes"`
            StorageWarningLevel    int    `json:"storage_warning_level"`
            StorageWarningInterval int    `json:"storage_warning_interval"`
            DisableWal             bool   `json:"disable_wal"`
            DirectRead             bool   `json:"direct_read"`
            DirectWrite            bool   `json:"direct_write"`
            MaxTotalWalSize        int    `json:"max_total_wal_size"`
            MetadataMemBudget      int    `json:"metadata_mem_budget"`
            DataCache              int    `json:"data_cache"`
            Threads                int    `json:"threads"`
            HiThreads              int    `json:"hi_threads"`
            MaxOpenFiles           int    `json:"max_open_files"`
        } `json:"depot"`
        User struct {
            LicenseFile string `json:"license_file"`
            LicenseKey  string `json:"license_key"`
            Daemon      bool   `json:"daemon"`
        } `json:"user"`
        Limiter struct {
            MaxResidentEntries int   `json:"max_resident_entries"`
            MaxBytes           int64 `json:"max_bytes"`
            MaxTrimQueueLength int   `json:"max_trim_queue_length"`
        } `json:"limiter"`
        Logger struct {
            LogLevel      int    `json:"log_level"`
            FlushInterval int    `json:"flush_interval"`
            LogDirectory  string `json:"log_directory"`
            LogToConsole  bool   `json:"log_to_console"`
            LogToSyslog   bool   `json:"log_to_syslog"`
        } `json:"logger"`
        Network struct {
            ServerSessions  int    `json:"server_sessions"`
            PartitionsCount int    `json:"partitions_count"`
            IdleTimeout     int    `json:"idle_timeout"`
            ClientTimeout   int    `json:"client_timeout"`
            ListenOn        string `json:"listen_on"`
        } `json:"network"`
        Chord struct {
            NodeID                   string   `json:"node_id"`
            NoStabilization          bool     `json:"no_stabilization"`
            BootstrappingPeers       []string `json:"bootstrapping_peers"`
            MinStabilizationInterval int      `json:"min_stabilization_interval"`
            MaxStabilizationInterval int      `json:"max_stabilization_interval"`
        } `json:"chord"`
    } `json:"local"`
    Global struct {
        Cluster struct {
            Transient              bool `json:"transient"`
            History                bool `json:"history"`
            ReplicationFactor      int  `json:"replication_factor"`
            MaxVersions            int  `json:"max_versions"`
            MaxTransactionDuration int  `json:"max_transaction_duration"`
        } `json:"cluster"`
        Security struct {
            EnableStop         bool   `json:"enable_stop"`
            EnablePurgeAll     bool   `json:"enable_purge_all"`
            Enabled            bool   `json:"enabled"`
            EncryptTraffic     bool   `json:"encrypt_traffic"`
            ClusterPrivateFile string `json:"cluster_private_file"`
            UserList           string `json:"user_list"`
        } `json:"security"`
    } `json:"global"`
}

NodeConfig : a JSON representation of a node's configuration

6.7.80. type NodeLocation

type NodeLocation struct {
    Address string
    Port    int16
}

NodeLocation : A structure representing the address of a quasardb node.

6.7.81. type NodeStatus

type NodeStatus struct {
    Memory struct {
        VM struct {
            Used  int64 `json:"used"`
            Total int64 `json:"total"`
        } `json:"vm"`
        Physmem struct {
            Used  int64 `json:"used"`
            Total int64 `json:"total"`
        } `json:"physmem"`
    } `json:"memory"`
    CPUTimes struct {
        Idle   int64 `json:"idle"`
        System int   `json:"system"`
        User   int64 `json:"user"`
    } `json:"cpu_times"`
    DiskUsage struct {
        Free  int64 `json:"free"`
        Total int64 `json:"total"`
    } `json:"disk_usage"`
    Network struct {
        ListeningEndpoint string `json:"listening_endpoint"`
        Partitions        struct {
            Count             int   `json:"count"`
            MaxSessions       int   `json:"max_sessions"`
            AvailableSessions []int `json:"available_sessions"`
        } `json:"partitions"`
    } `json:"network"`
    NodeID              string    `json:"node_id"`
    OperatingSystem     string    `json:"operating_system"`
    HardwareConcurrency int       `json:"hardware_concurrency"`
    Timestamp           time.Time `json:"timestamp"`
    Startup             time.Time `json:"startup"`
    EngineVersion       string    `json:"engine_version"`
    EngineBuildDate     time.Time `json:"engine_build_date"`
    Entries             struct {
        Resident struct {
            Count int `json:"count"`
            Size  int `json:"size"`
        } `json:"resident"`
        Persisted struct {
            Count int `json:"count"`
            Size  int `json:"size"`
        } `json:"persisted"`
    } `json:"entries"`
    Operations struct {
        Get struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"get"`
        GetAndRemove struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"get_and_remove"`
        Put struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"put"`
        Update struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"update"`
        GetAndUpdate struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"get_and_update"`
        CompareAndSwap struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"compare_and_swap"`
        Remove struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"remove"`
        RemoveIf struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"remove_if"`
        PurgeAll struct {
            Count     int `json:"count"`
            Successes int `json:"successes"`
            Failures  int `json:"failures"`
            Pageins   int `json:"pageins"`
            Evictions int `json:"evictions"`
            InBytes   int `json:"in_bytes"`
            OutBytes  int `json:"out_bytes"`
        } `json:"purge_all"`
    } `json:"operations"`
    Overall struct {
        Count     int `json:"count"`
        Successes int `json:"successes"`
        Failures  int `json:"failures"`
        Pageins   int `json:"pageins"`
        Evictions int `json:"evictions"`
        InBytes   int `json:"in_bytes"`
        OutBytes  int `json:"out_bytes"`
    } `json:"overall"`
}

NodeStatus : a JSON representation of a node's status

6.7.82. type NodeTopology

type NodeTopology struct {
    Predecessor struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    } `json:"predecessor"`
    Center struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    } `json:"center"`
    Successor struct {
        Reference string `json:"reference"`
        Endpoint  string `json:"endpoint"`
    } `json:"successor"`
}

NodeTopology : a JSON representation of a node's topology (predecessor, center, and successor)

6.7.83. type Protocol

type Protocol C.qdb_protocol_t

Protocol : A network protocol.

const (
    ProtocolTCP Protocol = C.qdb_p_tcp
)

Protocol values:

ProtocolTCP : Uses TCP/IP to communicate with the cluster. This is currently the only supported network protocol.

6.7.84. type Query

type Query struct {
    HandleType
}

Query : a builder type for executing queries. Retrieves the aliases of all entries that match the specified query. For the complete grammar, please refer to the documentation. Queries are transactional. The complexity of this function depends on the complexity of the query.

6.7.85. func (Query) Execute

func (q Query) Execute() ([]string, error)

Execute : Execute the current query

6.7.86. func (Query) ExecuteString

func (q Query) ExecuteString(query string) ([]string, error)

ExecuteString : Execute a string query immediately

6.7.87. func (*Query) NotTag

func (q *Query) NotTag(t string) *Query

NotTag : Adds a tag to exclude from the current query results

6.7.88. func (*Query) Tag

func (q *Query) Tag(t string) *Query

Tag : Adds a tag to include into the current query results

6.7.89. func (*Query) Type

func (q *Query) Type(t string) *Query

Type : Restrict the query results to a particular type
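
The builder methods above chain into a single query. A sketch only: it assumes a running cluster and that the handle exposes a Query constructor (written here as h.Query(); the constructor is not shown in this excerpt).

```go
// exampleQueryBuilder combines Tag, NotTag and Type before Execute.
// Sketch: h.Query() is an assumed constructor.
func exampleQueryBuilder(h HandleType) {
	q := h.Query()
	aliases, err := q.
		Tag("sensor").      // include entries tagged "sensor"
		NotTag("archived"). // exclude entries tagged "archived"
		Type("blob").       // restrict results to blob entries
		Execute()
	if err != nil {
		// handle error
	}
	for _, alias := range aliases {
		fmt.Println("matched:", alias)
	}
}
```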

6.7.90. type QueryExp

type QueryExp struct {
    HandleType
}

QueryExp : Experimental query

6.7.91. func (QueryExp) Execute

func (q QueryExp) Execute() (*QueryResult, error)

Execute : execute a query

6.7.92. type QueryPointResult

type QueryPointResult struct {
}

QueryPointResult : a query result point

6.7.93. func (QueryPointResult) Type

func (r QueryPointResult) Type() QueryResultValueType

Type : gives the type of the query point result

6.7.94. func (QueryPointResult) Value

func (r QueryPointResult) Value() interface{}

Value : gives the interface{} value of the query point result

6.7.95. type QueryResult

type QueryResult struct {
}

QueryResult : a query result

6.7.96. func (QueryResult) ScannedRows

func (r QueryResult) ScannedRows() int64

ScannedRows : number of rows scanned

The actual number of scanned rows may be greater than the value reported

6.7.97. func (QueryResult) Tables

func (r QueryResult) Tables() QueryTables

Tables : get tables of a query result

6.7.98. func (QueryResult) TablesCount

func (r QueryResult) TablesCount() int64

TablesCount : get the number of tables of a query result

6.7.99. type QueryResultValueType

type QueryResultValueType int64

QueryResultValueType : an enum of possible query point result types

const (
    QueryResultNone      QueryResultValueType = C.qdb_query_result_none
    QueryResultDouble    QueryResultValueType = C.qdb_query_result_double
    QueryResultBlob      QueryResultValueType = C.qdb_query_result_blob
    QueryResultInt64     QueryResultValueType = C.qdb_query_result_int64
    QueryResultTimestamp QueryResultValueType = C.qdb_query_result_timestamp
    QueryResultCount     QueryResultValueType = C.qdb_query_result_count
)

QueryResultNone : query result value none
QueryResultDouble : query result value double
QueryResultBlob : query result value blob
QueryResultInt64 : query result value int64
QueryResultTimestamp : query result value timestamp
QueryResultCount : query result value count
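
A point's dynamic type can be dispatched on before reading its Value(). A sketch; the concrete Go types asserted below (float64, int64, []byte, time.Time) are assumptions, not stated in this excerpt:

```go
// printPoint dispatches on the runtime type of a query point result.
// The concrete types asserted below are assumptions.
func printPoint(r QueryPointResult) {
	switch r.Type() {
	case QueryResultDouble:
		fmt.Println("double:", r.Value().(float64))
	case QueryResultInt64:
		fmt.Println("int64:", r.Value().(int64))
	case QueryResultBlob:
		fmt.Println("blob:", string(r.Value().([]byte)))
	case QueryResultTimestamp:
		fmt.Println("timestamp:", r.Value().(time.Time))
	case QueryResultNone:
		fmt.Println("no value")
	}
}
```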

6.7.100. type QueryRow

type QueryRow []C.qdb_point_result_t

QueryRow : query result table row

6.7.101. type QueryRows

type QueryRows []*C.qdb_point_result_t

QueryRows : query result table rows

6.7.102. type QueryTable

type QueryTable C.qdb_table_result_t

QueryTable : query result table

6.7.103. type QueryTables

type QueryTables []C.qdb_table_result_t

QueryTables : query result tables

6.7.104. type RefID

type RefID C.qdb_id_t

RefID : Unique identifier

6.7.105. type TimeseriesEntry

type TimeseriesEntry struct {
    Entry
}

TimeseriesEntry : timeseries entry data type

6.7.106. func (TimeseriesEntry) BlobColumn

func (entry TimeseriesEntry) BlobColumn(columnName string) TsBlobColumn

BlobColumn : create a blob column object

6.7.107. func (TimeseriesEntry) Bulk

func (entry TimeseriesEntry) Bulk(cols ...TsColumnInfo) (*TsBulk, error)

Bulk : create a bulk object for the specified columns

If no columns are specified, the server-side registered columns are used

6.7.108. func (TimeseriesEntry) Columns

func (entry TimeseriesEntry) Columns() ([]TsDoubleColumn, []TsBlobColumn, []TsInt64Column, []TsTimestampColumn, error)

Columns : return the current columns

6.7.109. func (TimeseriesEntry) ColumnsInfo

func (entry TimeseriesEntry) ColumnsInfo() ([]TsColumnInfo, error)

ColumnsInfo : return the current columns information

6.7.110. func (TimeseriesEntry) Create

func (entry TimeseriesEntry) Create(shardSize time.Duration, cols ...TsColumnInfo) error

Create : create a new timeseries

The first parameter is the shard size, i.e. the time span covered by a single shard
Ex: shardSize := 24 * time.Hour

6.7.111. func (TimeseriesEntry) DoubleColumn

func (entry TimeseriesEntry) DoubleColumn(columnName string) TsDoubleColumn

DoubleColumn : create a double column object

6.7.112. func (TimeseriesEntry) InsertColumns

func (entry TimeseriesEntry) InsertColumns(cols ...TsColumnInfo) error

InsertColumns : insert columns into an existing timeseries

6.7.113. func (TimeseriesEntry) Int64Column

func (entry TimeseriesEntry) Int64Column(columnName string) TsInt64Column

Int64Column : create an int64 column object

6.7.114. func (TimeseriesEntry) TimestampColumn

func (entry TimeseriesEntry) TimestampColumn(columnName string) TsTimestampColumn

TimestampColumn : create a timestamp column object

6.7.115. type TsAggregationType

type TsAggregationType C.qdb_ts_aggregation_type_t

TsAggregationType : typedef of C.qdb_ts_aggregation_type_t

const (
    AggFirst              TsAggregationType = C.qdb_agg_first
    AggLast               TsAggregationType = C.qdb_agg_last
    AggMin                TsAggregationType = C.qdb_agg_min
    AggMax                TsAggregationType = C.qdb_agg_max
    AggArithmeticMean     TsAggregationType = C.qdb_agg_arithmetic_mean
    AggHarmonicMean       TsAggregationType = C.qdb_agg_harmonic_mean
    AggGeometricMean      TsAggregationType = C.qdb_agg_geometric_mean
    AggQuadraticMean      TsAggregationType = C.qdb_agg_quadratic_mean
    AggCount              TsAggregationType = C.qdb_agg_count
    AggSum                TsAggregationType = C.qdb_agg_sum
    AggSumOfSquares       TsAggregationType = C.qdb_agg_sum_of_squares
    AggSpread             TsAggregationType = C.qdb_agg_spread
    AggSampleVariance     TsAggregationType = C.qdb_agg_sample_variance
    AggSampleStddev       TsAggregationType = C.qdb_agg_sample_stddev
    AggPopulationVariance TsAggregationType = C.qdb_agg_population_variance
    AggPopulationStddev   TsAggregationType = C.qdb_agg_population_stddev
    AggAbsMin             TsAggregationType = C.qdb_agg_abs_min
    AggAbsMax             TsAggregationType = C.qdb_agg_abs_max
    AggProduct            TsAggregationType = C.qdb_agg_product
    AggSkewness           TsAggregationType = C.qdb_agg_skewness
    AggKurtosis           TsAggregationType = C.qdb_agg_kurtosis
)

Each aggregation type computes its value over the points between the begin and end timestamps of its range
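
As a sketch of how these aggregation types combine with a column's Aggregate method (assuming a running cluster and a timeseries with a double column, as created in the Examples section):

```go
// exampleAggregate requests two aggregations over the same time range.
func exampleAggregate(timeseries TimeseriesEntry, start, end time.Time) {
	col := timeseries.DoubleColumn("serie_column_double")
	rng := NewRange(start, end)

	// Request the arithmetic mean and the maximum over the range.
	aggs, err := col.Aggregate(
		NewDoubleAggregation(AggArithmeticMean, rng),
		NewDoubleAggregation(AggMax, rng),
	)
	if err != nil {
		// handle error
	}
	fmt.Println("mean:", aggs[0].Result().Content(), "over", aggs[0].Count(), "points")
	fmt.Println("max:", aggs[1].Result().Content())
}
```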

6.7.116. type TsBatch

type TsBatch struct {
}

TsBatch : A structure that allows appending data to a timeseries

6.7.117. func (*TsBatch) AddBlob

func (t *TsBatch) AddBlob(timeseries, column string, points ...TsBlobPoint) error

AddBlob : Add one or more blob points to the batch, regardless of the column initialization order

6.7.118. func (*TsBatch) AddDouble

func (t *TsBatch) AddDouble(timeseries, column string, points ...TsDoublePoint) error

AddDouble : Add one or more double points to the batch, regardless of the column initialization order

6.7.119. func (*TsBatch) AddInt64

func (t *TsBatch) AddInt64(timeseries, column string, points ...TsInt64Point) error

AddInt64 : Add one or more int64 points to the batch, regardless of the column initialization order

6.7.120. func (*TsBatch) AddTimestamp

func (t *TsBatch) AddTimestamp(timeseries, column string, points ...TsTimestampPoint) error

AddTimestamp : Add one or more timestamp points to the batch, regardless of the column initialization order

6.7.121. func (*TsBatch) Push

func (t *TsBatch) Push() error

Push : Push the inserted data

6.7.122. func (*TsBatch) Release

func (t *TsBatch) Release()

Release : release the memory of the batch table

6.7.123. func (*TsBatch) RowFinalize

func (t *TsBatch) RowFinalize(timestamp time.Time) error

RowFinalize : Finalize the current row with the given timestamp

6.7.124. func (*TsBatch) RowSetBlob

func (t *TsBatch) RowSetBlob(content []byte) error

RowSetBlob : Add a blob to the current row

6.7.125. func (*TsBatch) RowSetBlobNoCopy

func (t *TsBatch) RowSetBlobNoCopy(content []byte) error

RowSetBlobNoCopy : Add a blob to the current row without copying it

6.7.126. func (*TsBatch) RowSetDouble

func (t *TsBatch) RowSetDouble(value float64) error

RowSetDouble : Add a double to the current row

6.7.127. func (*TsBatch) RowSetInt64

func (t *TsBatch) RowSetInt64(value int64) error

RowSetInt64 : Add an int64 to the current row

6.7.128. func (*TsBatch) RowSetTimestamp

func (t *TsBatch) RowSetTimestamp(value time.Time) error

RowSetTimestamp : Add a timestamp to the current row

6.7.129. func (*TsBatch) RowSkipColumn

func (t *TsBatch) RowSkipColumn() error

RowSkipColumn : Skip this column in the current row
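
The row-oriented TsBatch calls above compose into the following workflow. A sketch only: the batch constructor (written here as h.TsBatch) is not part of this excerpt, and a running cluster with a matching timeseries is assumed.

```go
// exampleBatch appends two rows to a timeseries via the batch row API.
func exampleBatch(h HandleType) {
	// Hypothetical constructor: obtaining the batch from the handle is assumed.
	batch, err := h.TsBatch(
		NewTsBatchColumnInfo("my_timeseries", "serie_column_double", 10),
		NewTsBatchColumnInfo("my_timeseries", "serie_column_blob", 10),
	)
	if err != nil {
		// handle error
	}
	defer batch.Release()

	// Set one value per registered column, then stamp the row.
	batch.RowSetDouble(3.14)
	batch.RowSetBlob([]byte("payload"))
	batch.RowFinalize(time.Now())

	// A second row that skips the blob column.
	batch.RowSetDouble(2.71)
	batch.RowSkipColumn()
	batch.RowFinalize(time.Now())

	if err := batch.Push(); err != nil {
		// handle error
	}
}
```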

6.7.130. type TsBatchColumnInfo

type TsBatchColumnInfo struct {
    Timeseries       string
    Column           string
    ElementCountHint int64
}

TsBatchColumnInfo : Represents one column in a timeseries. The underlying structure is preallocated using ElementCountHint.

6.7.131. func NewTsBatchColumnInfo

func NewTsBatchColumnInfo(timeseries string, column string, hint int64) TsBatchColumnInfo

NewTsBatchColumnInfo : Creates a new TsBatchColumnInfo

6.7.132. type TsBlobAggregation

type TsBlobAggregation struct {
}

TsBlobAggregation : Aggregation of blob type

6.7.133. func NewBlobAggregation

func NewBlobAggregation(kind TsAggregationType, rng TsRange) *TsBlobAggregation

NewBlobAggregation : Create new timeseries blob aggregation

6.7.134. func (TsBlobAggregation) Count

func (t TsBlobAggregation) Count() int64

Count : returns the number of points aggregated into the result

6.7.135. func (TsBlobAggregation) Range

func (t TsBlobAggregation) Range() TsRange

Range : returns the range of the aggregation

6.7.136. func (TsBlobAggregation) Result

func (t TsBlobAggregation) Result() TsBlobPoint

Result : result of the aggregation

6.7.137. func (TsBlobAggregation) Type

func (t TsBlobAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

6.7.138. type TsBlobColumn

type TsBlobColumn struct {
}

TsBlobColumn : a time series blob column

6.7.139. func (TsBlobColumn) Aggregate

func (column TsBlobColumn) Aggregate(aggs ...*TsBlobAggregation) ([]TsBlobAggregation, error)

Aggregate : Aggregate a sub-part of the time series.

It is an error to call this function on a non-existing timeseries.

6.7.140. func (TsBlobColumn) EraseRanges

func (column TsBlobColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

6.7.141. func (TsBlobColumn) GetRanges

func (column TsBlobColumn) GetRanges(rgs ...TsRange) ([]TsBlobPoint, error)

GetRanges : Retrieves blobs in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

6.7.142. func (TsBlobColumn) Insert

func (column TsBlobColumn) Insert(points ...TsBlobPoint) error

Insert blob points into a timeseries

6.7.143. type TsBlobPoint

type TsBlobPoint struct {
}

TsBlobPoint : timestamped blob data point

6.7.144. func NewTsBlobPoint

func NewTsBlobPoint(timestamp time.Time, value []byte) TsBlobPoint

NewTsBlobPoint : Create new timeseries blob point

6.7.145. func (TsBlobPoint) Content

func (t TsBlobPoint) Content() []byte

Content : return data point content

6.7.146. func (TsBlobPoint) Timestamp

func (t TsBlobPoint) Timestamp() time.Time

Timestamp : return data point timestamp

6.7.147. type TsBulk

type TsBulk struct {
}

TsBulk : A structure that allows appending data to a timeseries

6.7.148. func (*TsBulk) Append

func (t *TsBulk) Append() error

Append : Adds the current row to the list of rows to be pushed

6.7.149. func (*TsBulk) Blob

func (t *TsBulk) Blob(content []byte) *TsBulk

Blob : adds a blob to the current row

6.7.150. func (*TsBulk) Double

func (t *TsBulk) Double(value float64) *TsBulk

Double : adds a double to the current row

6.7.151. func (*TsBulk) GetBlob

func (t *TsBulk) GetBlob() ([]byte, error)

GetBlob : gets a blob from the current row

6.7.152. func (*TsBulk) GetDouble

func (t *TsBulk) GetDouble() (float64, error)

GetDouble : gets a double from the current row

6.7.153. func (*TsBulk) GetInt64

func (t *TsBulk) GetInt64() (int64, error)

GetInt64 : gets an int64 from the current row

6.7.154. func (*TsBulk) GetRanges

func (t *TsBulk) GetRanges(rgs ...TsRange) error

GetRanges : create a bulk query over the specified ranges

6.7.155. func (*TsBulk) GetTimestamp

func (t *TsBulk) GetTimestamp() (time.Time, error)

GetTimestamp : gets a timestamp from the current row

6.7.156. func (*TsBulk) Ignore

func (t *TsBulk) Ignore() *TsBulk

Ignore : ignores this column in the current row

6.7.157. func (*TsBulk) Int64

func (t *TsBulk) Int64(value int64) *TsBulk

Int64 : adds an int64 to the current row

6.7.158. func (*TsBulk) NextRow

func (t *TsBulk) NextRow() (time.Time, error)

NextRow : advance to the next row, or to the first row if iteration has not started yet

6.7.159. func (*TsBulk) Push

func (t *TsBulk) Push() (int, error)

Push : pushes the list of appended rows and returns the number of rows added

6.7.160. func (*TsBulk) Release

func (t *TsBulk) Release()

Release : release the memory of the local table

6.7.161. func (*TsBulk) Row

func (t *TsBulk) Row(timestamp time.Time) *TsBulk

Row : initializes a new row at the given timestamp

6.7.162. func (TsBulk) RowCount

func (t TsBulk) RowCount() int

RowCount : returns the number of rows to be appended

6.7.163. func (*TsBulk) Timestamp

func (t *TsBulk) Timestamp(value time.Time) *TsBulk

Timestamp : adds a timestamp to the current row
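
The TsBulk methods above chain into a row-by-row append. A sketch, assuming a running cluster and a timeseries whose registered columns are a double column followed by a blob column (as in the Examples section):

```go
// exampleBulk queues two rows and pushes them in one call.
func exampleBulk(timeseries TimeseriesEntry) {
	bulk, err := timeseries.Bulk() // no columns given: server-side columns are used
	if err != nil {
		// handle error
	}
	defer bulk.Release()

	// Each row provides one value per column, in column order, then is queued.
	bulk.Row(time.Now()).Double(1.5).Blob([]byte("first")).Append()
	bulk.Row(time.Now()).Double(2.5).Ignore().Append() // skip the blob column

	rowCount, err := bulk.Push()
	if err != nil {
		// handle error
	}
	fmt.Println("rows pushed:", rowCount)
}
```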

6.7.164. type TsColumnInfo

type TsColumnInfo struct {
}

TsColumnInfo : column information in timeseries

6.7.165. func NewTsColumnInfo

func NewTsColumnInfo(columnName string, columnType TsColumnType) TsColumnInfo

NewTsColumnInfo : create a column info structure

6.7.166. func (TsColumnInfo) Name

func (t TsColumnInfo) Name() string

Name : return column name

6.7.167. func (TsColumnInfo) Type

func (t TsColumnInfo) Type() TsColumnType

Type : return column type

6.7.168. type TsColumnType

type TsColumnType C.qdb_ts_column_type_t

TsColumnType : Timeseries column types

const (
    TsColumnUninitialized TsColumnType = C.qdb_ts_column_uninitialized
    TsColumnDouble        TsColumnType = C.qdb_ts_column_double
    TsColumnBlob          TsColumnType = C.qdb_ts_column_blob
    TsColumnInt64         TsColumnType = C.qdb_ts_column_int64
    TsColumnTimestamp     TsColumnType = C.qdb_ts_column_timestamp
)

Values

TsColumnUninitialized : column is not initialized
TsColumnDouble : column is a double point
TsColumnBlob : column is a blob point
TsColumnInt64 : column is an int64 point
TsColumnTimestamp : column is a timestamp point

6.7.169. type TsDoubleAggregation

type TsDoubleAggregation struct {
}

TsDoubleAggregation : Aggregation of double type

6.7.170. func NewDoubleAggregation

func NewDoubleAggregation(kind TsAggregationType, rng TsRange) *TsDoubleAggregation

NewDoubleAggregation : Create new timeseries double aggregation

6.7.171. func (TsDoubleAggregation) Count

func (t TsDoubleAggregation) Count() int64

Count : returns the number of points aggregated into the result

6.7.172. func (TsDoubleAggregation) Range

func (t TsDoubleAggregation) Range() TsRange

Range : returns the range of the aggregation

6.7.173. func (TsDoubleAggregation) Result

func (t TsDoubleAggregation) Result() TsDoublePoint

Result : result of the aggregation

6.7.174. func (TsDoubleAggregation) Type

func (t TsDoubleAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

6.7.175. type TsDoubleColumn

type TsDoubleColumn struct {
}

TsDoubleColumn : a time series double column

6.7.176. func (TsDoubleColumn) Aggregate

func (column TsDoubleColumn) Aggregate(aggs ...*TsDoubleAggregation) ([]TsDoubleAggregation, error)

Aggregate : Aggregate a sub-part of a timeseries from the specified aggregations.

It is an error to call this function on a non-existing timeseries.

6.7.177. func (TsDoubleColumn) EraseRanges

func (column TsDoubleColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

6.7.178. func (TsDoubleColumn) GetRanges

func (column TsDoubleColumn) GetRanges(rgs ...TsRange) ([]TsDoublePoint, error)

GetRanges : Retrieves doubles in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

6.7.179. func (TsDoubleColumn) Insert

func (column TsDoubleColumn) Insert(points ...TsDoublePoint) error

Insert double points into a timeseries

6.7.180. type TsDoublePoint

type TsDoublePoint struct {
}

TsDoublePoint : timestamped double data point

6.7.181. func NewTsDoublePoint

func NewTsDoublePoint(timestamp time.Time, value float64) TsDoublePoint

NewTsDoublePoint : Create new timeseries double point

6.7.182. func (TsDoublePoint) Content

func (t TsDoublePoint) Content() float64

Content : return data point content

6.7.183. func (TsDoublePoint) Timestamp

func (t TsDoublePoint) Timestamp() time.Time

Timestamp : return data point timestamp

6.7.184. type TsInt64Aggregation

type TsInt64Aggregation struct {
}

TsInt64Aggregation : Aggregation of int64 type

6.7.185. func NewInt64Aggregation

func NewInt64Aggregation(kind TsAggregationType, rng TsRange) *TsInt64Aggregation

NewInt64Aggregation : Create new timeseries int64 aggregation

6.7.186. func (TsInt64Aggregation) Count

func (t TsInt64Aggregation) Count() int64

Count : returns the number of points aggregated into the result

6.7.187. func (TsInt64Aggregation) Range

func (t TsInt64Aggregation) Range() TsRange

Range : returns the range of the aggregation

6.7.188. func (TsInt64Aggregation) Result

func (t TsInt64Aggregation) Result() TsInt64Point

Result : result of the aggregation

6.7.189. func (TsInt64Aggregation) Type

func (t TsInt64Aggregation) Type() TsAggregationType

Type : returns the type of the aggregation

6.7.190. type TsInt64Column

type TsInt64Column struct {
}

TsInt64Column : a time series int64 column

6.7.191. func (TsInt64Column) EraseRanges

func (column TsInt64Column) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

6.7.192. func (TsInt64Column) GetRanges

func (column TsInt64Column) GetRanges(rgs ...TsRange) ([]TsInt64Point, error)

GetRanges : Retrieves int64s in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

6.7.193. func (TsInt64Column) Insert

func (column TsInt64Column) Insert(points ...TsInt64Point) error

Insert int64 points into a timeseries

6.7.194. type TsInt64Point

type TsInt64Point struct {
}

TsInt64Point : timestamped int64 data point

6.7.195. func NewTsInt64Point

func NewTsInt64Point(timestamp time.Time, value int64) TsInt64Point

NewTsInt64Point : Create new timeseries int64 point

6.7.196. func (TsInt64Point) Content

func (t TsInt64Point) Content() int64

Content : return data point content

6.7.197. func (TsInt64Point) Timestamp

func (t TsInt64Point) Timestamp() time.Time

Timestamp : return data point timestamp

6.7.198. type TsRange

type TsRange struct {
}

TsRange : timeseries range with begin and end timestamp

6.7.199. func NewRange

func NewRange(begin, end time.Time) TsRange

NewRange : creates a time range

6.7.200. func (TsRange) Begin

func (t TsRange) Begin() time.Time

Begin : returns the start of the time range

6.7.201. func (TsRange) End

func (t TsRange) End() time.Time

End : returns the end of the time range
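
A sketch combining NewRange with a column's Insert and GetRanges (a running cluster and a double column named as in the Examples section are assumed; whether the end bound is inclusive is not stated in this excerpt):

```go
// exampleRanges inserts two points, then reads them back over a range.
func exampleRanges(timeseries TimeseriesEntry) {
	col := timeseries.DoubleColumn("serie_column_double")

	// Arbitrary illustration timestamps.
	t0 := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	col.Insert(
		NewTsDoublePoint(t0, 1.0),
		NewTsDoublePoint(t0.Add(time.Minute), 2.0),
	)

	points, err := col.GetRanges(NewRange(t0, t0.Add(time.Hour)))
	if err != nil {
		// handle error
	}
	for _, p := range points {
		fmt.Println(p.Timestamp(), p.Content())
	}
}
```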

6.7.202. type TsTimestampAggregation

type TsTimestampAggregation struct {
}

TsTimestampAggregation : Aggregation of timestamp type

6.7.203. func NewTimestampAggregation

func NewTimestampAggregation(kind TsAggregationType, rng TsRange) *TsTimestampAggregation

NewTimestampAggregation : Create new timeseries timestamp aggregation

6.7.204. func (TsTimestampAggregation) Count

func (t TsTimestampAggregation) Count() int64

Count : returns the number of points aggregated into the result

6.7.205. func (TsTimestampAggregation) Range

func (t TsTimestampAggregation) Range() TsRange

Range : returns the range of the aggregation

6.7.206. func (TsTimestampAggregation) Result

func (t TsTimestampAggregation) Result() TsTimestampPoint

Result : result of the aggregation

6.7.207. func (TsTimestampAggregation) Type

func (t TsTimestampAggregation) Type() TsAggregationType

Type : returns the type of the aggregation

6.7.208. type TsTimestampColumn

type TsTimestampColumn struct {
}

TsTimestampColumn : a time series timestamp column

6.7.209. func (TsTimestampColumn) EraseRanges

func (column TsTimestampColumn) EraseRanges(rgs ...TsRange) (uint64, error)

EraseRanges : erase all points in the specified ranges

6.7.210. func (TsTimestampColumn) GetRanges

func (column TsTimestampColumn) GetRanges(rgs ...TsRange) ([]TsTimestampPoint, error)

GetRanges : Retrieves timestamps in the specified range of the time series column.

It is an error to call this function on a non-existing timeseries.

6.7.211. func (TsTimestampColumn) Insert

func (column TsTimestampColumn) Insert(points ...TsTimestampPoint) error

Insert timestamp points into a timeseries

6.7.212. type TsTimestampPoint

type TsTimestampPoint struct {
}

TsTimestampPoint : timestamped timestamp data point

6.7.213. func NewTsTimestampPoint

func NewTsTimestampPoint(timestamp time.Time, value time.Time) TsTimestampPoint

NewTsTimestampPoint : Create new timeseries timestamp point

6.7.214. func (TsTimestampPoint) Content

func (t TsTimestampPoint) Content() time.Time

Content : return data point content

6.7.215. func (TsTimestampPoint) Timestamp

func (t TsTimestampPoint) Timestamp() time.Time

Timestamp : return data point timestamp

6.7.216. Examples

package qdb

import (
	"fmt"
	"time"
)

func ExampleHandleType() {
	var h HandleType
	h.Open(ProtocolTCP)
	fmt.Printf("API build: %s\n", h.APIVersion())
	// Output: API build: 2.6.0master
}

func ExampleEntry_Alias() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()

	blob1 := h.Blob("BLOB_1")
	blob1.Put([]byte("blob 1 content"), NeverExpires())
	defer blob1.Remove()
	blob2 := h.Blob("BLOB_2")
	blob2.Put([]byte("blob 2 content"), NeverExpires())
	defer blob2.Remove()

	fmt.Println("Alias blob 1:", blob1.Alias())
	fmt.Println("Alias blob 2:", blob2.Alias())

	tags1 := []string{"tag blob 1", "tag both blob"}
	blob1.AttachTags(tags1)
	defer blob1.DetachTags(tags1)
	tags2 := []string{"tag blob 2", "tag both blob"}
	blob2.AttachTags(tags2)
	defer blob2.DetachTags(tags2)

	resultTagBlob1, _ := blob1.GetTagged("tag blob 1")
	fmt.Println("Tagged with 'tag blob 1':", resultTagBlob1)
	resultTagBlob2, _ := blob1.GetTagged("tag blob 2")
	fmt.Println("Tagged with 'tag blob 2':", resultTagBlob2)
	resultTagBoth, _ := blob1.GetTagged("tag both blob")
	fmt.Println("Tagged with 'tag both blob':", resultTagBoth)

	// Output: Alias blob 1: BLOB_1
	// Alias blob 2: BLOB_2
	// Tagged with 'tag blob 1': [BLOB_1]
	// Tagged with 'tag blob 2': [BLOB_2]
	// Tagged with 'tag both blob': [BLOB_2 BLOB_1]

}

func ExampleBlobEntry() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()

	alias := "BlobAlias"
	blob := h.Blob(alias)
	defer blob.Remove()

	content := []byte("content")
	blob.Put(content, NeverExpires())

	obtainedContent, _ := blob.Get()
	fmt.Println("Get content:", string(obtainedContent))

	updateContent := []byte("updated content")
	blob.Update(updateContent, PreserveExpiration())

	obtainedContent, _ = blob.Get()
	fmt.Println("Get updated content:", string(obtainedContent))

	newContent := []byte("new content")
	previousContent, _ := blob.GetAndUpdate(newContent, PreserveExpiration())
	fmt.Println("Previous content:", string(previousContent))

	obtainedContent, _ = blob.Get()
	fmt.Println("Get new content:", string(obtainedContent))

	// Output:
	// Get content: content
	// Get updated content: updated content
	// Previous content: updated content
	// Get new content: new content
}

func ExampleIntegerEntry() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()

	alias := "IntAlias"
	integer := h.Integer(alias)

	integer.Put(int64(3), NeverExpires())
	defer integer.Remove()

	obtainedContent, _ := integer.Get()
	fmt.Println("Get content:", obtainedContent)

	newContent := int64(87)
	integer.Update(newContent, NeverExpires())

	obtainedContent, _ = integer.Get()
	fmt.Println("Get updated content:", obtainedContent)

	integer.Add(3)

	obtainedContent, _ = integer.Get()
	fmt.Println("Get added content:", obtainedContent)

	// Output:
	// Get content: 3
	// Get updated content: 87
	// Get added content: 90
}

func ExampleTimeseriesEntry() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()
	timeseries := h.Timeseries("alias")

	fmt.Println("timeseries:", timeseries.Alias())
	// Output:
	// timeseries: alias
}

func ExampleTimeseriesEntry_Create() {
	h, timeseries := MustCreateTimeseries("ExampleTimeseriesEntry_Create")
	defer h.Close()

	// duration, columns...
	timeseries.Create(24*time.Hour, NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
}

func ExampleTimeseriesEntry_Columns() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Columns")
	defer h.Close()

	doubleColumns, blobColumns, int64Columns, timestampColumns, err := timeseries.Columns()
	if err != nil {
		// handle error
	}
	for _, col := range doubleColumns {
		fmt.Println("column:", col.Name())
		// do something like Insert, GetRanges with a double column
	}
	for _, col := range blobColumns {
		fmt.Println("column:", col.Name())
		// do something like Insert, GetRanges with a blob column
	}
	for _, col := range int64Columns {
		fmt.Println("column:", col.Name())
		// do something like Insert, GetRanges with an int64 column
	}
	for _, col := range timestampColumns {
		fmt.Println("column:", col.Name())
		// do something like Insert, GetRanges with a timestamp column
	}
	// Output:
	// column: serie_column_double
	// column: serie_column_blob
	// column: serie_column_int64
	// column: serie_column_timestamp
}

func ExampleTimeseriesEntry_ColumnsInfo() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_ColumnsInfo")
	defer h.Close()

	columns, err := timeseries.ColumnsInfo()
	if err != nil {
		// handle error
	}
	for _, col := range columns {
		fmt.Println("column:", col.Name())
	}
	// Output:
	// column: serie_column_blob
	// column: serie_column_double
	// column: serie_column_int64
	// column: serie_column_timestamp
}

func ExampleTimeseriesEntry_InsertColumns() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_InsertColumns")
	defer h.Close()

	err := timeseries.InsertColumns(NewTsColumnInfo("serie_column_blob_2", TsColumnBlob), NewTsColumnInfo("serie_column_double_2", TsColumnDouble))
	if err != nil {
		// handle error
	}
	columns, err := timeseries.ColumnsInfo()
	if err != nil {
		// handle error
	}
	for _, col := range columns {
		fmt.Println("column:", col.Name())
	}
	// Output:
	// column: serie_column_blob
	// column: serie_column_double
	// column: serie_column_int64
	// column: serie_column_timestamp
	// column: serie_column_blob_2
	// column: serie_column_double_2
}

func ExampleTimeseriesEntry_DoubleColumn() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_DoubleColumn")
	defer h.Close()

	column := timeseries.DoubleColumn("serie_column_double")
	fmt.Println("column:", column.Name())
	// Output:
	// column: serie_column_double
}

func ExampleTsDoubleColumn_Insert() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsDoubleColumn_Insert")
	defer h.Close()

	column := timeseries.DoubleColumn("serie_column_double")

	// Insert only one point:
	column.Insert(NewTsDoublePoint(time.Now(), 3.2))

	// Insert multiple points
	doublePoints := make([]TsDoublePoint, 2)
	doublePoints[0] = NewTsDoublePoint(time.Now(), 3.2)
	doublePoints[1] = NewTsDoublePoint(time.Now(), 4.8)

	err := column.Insert(doublePoints...)
	if err != nil {
		// handle error
	}
}

func ExampleTsDoubleColumn_GetRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_GetRanges")
	defer h.Close()

	column := timeseries.DoubleColumn("serie_column_double")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	doublePoints, err := column.GetRanges(r)
	if err != nil {
		// handle error
	}
	for _, point := range doublePoints {
		fmt.Println("timestamp:", point.Timestamp(), "- value:", point.Content())
	}
	// Output:
	// timestamp: 1970-01-01 01:00:10 +0100 CET - value: 0
	// timestamp: 1970-01-01 01:00:20 +0100 CET - value: 1
	// timestamp: 1970-01-01 01:00:30 +0100 CET - value: 2
	// timestamp: 1970-01-01 01:00:40 +0100 CET - value: 3
}

func ExampleTsDoubleColumn_EraseRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_EraseRanges")
	defer h.Close()

	column := timeseries.DoubleColumn("serie_column_double")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	numberOfErasedValues, err := column.EraseRanges(r)
	if err != nil {
		// handle error
	}
	fmt.Println("Number of erased values:", numberOfErasedValues)
	// Output:
	// Number of erased values: 4
}

func ExampleTsDoubleColumn_Aggregate() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsDoubleColumn_Aggregate")
	defer h.Close()

	column := timeseries.DoubleColumn("serie_column_double")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	aggFirst := NewDoubleAggregation(AggFirst, r)
	aggMean := NewDoubleAggregation(AggArithmeticMean, r)
	results, err := column.Aggregate(aggFirst, aggMean)
	if err != nil {
		// handle error
	}
	fmt.Println("first:", results[0].Result().Content())
	fmt.Println("mean:", results[1].Result().Content())
	fmt.Println("number of elements reviewed for mean:", results[1].Count())
	// Output:
	// first: 0
	// mean: 1.5
	// number of elements reviewed for mean: 4
}

func ExampleTimeseriesEntry_BlobColumn() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTimeseriesEntry_BlobColumn")
	defer h.Close()

	column := timeseries.BlobColumn("serie_column_blob")
	fmt.Println("column:", column.Name())
	// Output:
	// column: serie_column_blob
}

func ExampleTsBlobColumn_Insert() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsBlobColumn_Insert")
	defer h.Close()

	column := timeseries.BlobColumn("serie_column_blob")

	// Insert only one point:
	column.Insert(NewTsBlobPoint(time.Now(), []byte("content")))

	// Insert multiple points
	blobPoints := make([]TsBlobPoint, 2)
	blobPoints[0] = NewTsBlobPoint(time.Now(), []byte("content"))
	blobPoints[1] = NewTsBlobPoint(time.Now(), []byte("content_2"))

	err := column.Insert(blobPoints...)
	if err != nil {
		// handle error
	}
}

func ExampleTsBlobColumn_GetRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_GetRanges")
	defer h.Close()

	column := timeseries.BlobColumn("serie_column_blob")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	blobPoints, err := column.GetRanges(r)
	if err != nil {
		// handle error
	}
	for _, point := range blobPoints {
		fmt.Println("timestamp:", point.Timestamp(), "- value:", string(point.Content()))
	}
	// Output:
	// timestamp: 1970-01-01 01:00:10 +0100 CET - value: content_0
	// timestamp: 1970-01-01 01:00:20 +0100 CET - value: content_1
	// timestamp: 1970-01-01 01:00:30 +0100 CET - value: content_2
	// timestamp: 1970-01-01 01:00:40 +0100 CET - value: content_3
}

func ExampleTsBlobColumn_EraseRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_EraseRanges")
	defer h.Close()

	column := timeseries.BlobColumn("serie_column_blob")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	numberOfErasedValues, err := column.EraseRanges(r)
	if err != nil {
		// handle error
	}
	fmt.Println("Number of erased values:", numberOfErasedValues)
	// Output:
	// Number of erased values: 4
}

func ExampleTsBlobColumn_Aggregate() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsBlobColumn_Aggregate")
	defer h.Close()

	column := timeseries.BlobColumn("serie_column_blob")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	aggFirst := NewBlobAggregation(AggFirst, r)
	results, err := column.Aggregate(aggFirst)
	if err != nil {
		// handle error
	}
	fmt.Println("first:", string(results[0].Result().Content()))
	// Output:
	// first: content_0
}

func ExampleTimeseriesEntry_Int64Column() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Int64Column")
	defer h.Close()

	column := timeseries.Int64Column("serie_column_int64")
	fmt.Println("column:", column.Name())
	// Output:
	// column: serie_column_int64
}

func ExampleTsInt64Column_Insert() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsInt64Column_Insert")
	defer h.Close()

	column := timeseries.Int64Column("serie_column_int64")

	// Insert only one point:
	column.Insert(NewTsInt64Point(time.Now(), 3))

	// Insert multiple points
	int64Points := make([]TsInt64Point, 2)
	int64Points[0] = NewTsInt64Point(time.Now(), 3)
	int64Points[1] = NewTsInt64Point(time.Now(), 4)

	err := column.Insert(int64Points...)
	if err != nil {
		// handle error
	}
}

func ExampleTsInt64Column_GetRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_GetRanges")
	defer h.Close()

	column := timeseries.Int64Column("serie_column_int64")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	int64Points, err := column.GetRanges(r)
	if err != nil {
		// handle error
	}
	for _, point := range int64Points {
		fmt.Println("timestamp:", point.Timestamp(), "- value:", point.Content())
	}
	// Output:
	// timestamp: 1970-01-01 01:00:10 +0100 CET - value: 0
	// timestamp: 1970-01-01 01:00:20 +0100 CET - value: 1
	// timestamp: 1970-01-01 01:00:30 +0100 CET - value: 2
	// timestamp: 1970-01-01 01:00:40 +0100 CET - value: 3
}

func ExampleTsInt64Column_EraseRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsInt64Column_EraseRanges")
	defer h.Close()

	column := timeseries.Int64Column("serie_column_int64")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	numberOfErasedValues, err := column.EraseRanges(r)
	if err != nil {
		// handle error
	}
	fmt.Println("Number of erased values:", numberOfErasedValues)
	// Output:
	// Number of erased values: 4
}

func ExampleTimeseriesEntry_TimestampColumn() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_TimestampColumn")
	defer h.Close()

	column := timeseries.TimestampColumn("serie_column_timestamp")
	fmt.Println("column:", column.Name())
	// Output:
	// column: serie_column_timestamp
}

func ExampleTsTimestampColumn_Insert() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsTimestampColumn_Insert")
	defer h.Close()

	column := timeseries.TimestampColumn("serie_column_timestamp")

	// Insert only one point:
	column.Insert(NewTsTimestampPoint(time.Now(), time.Now()))

	// Insert multiple points
	timestampPoints := make([]TsTimestampPoint, 2)
	timestampPoints[0] = NewTsTimestampPoint(time.Now(), time.Now())
	timestampPoints[1] = NewTsTimestampPoint(time.Now(), time.Now())

	err := column.Insert(timestampPoints...)
	if err != nil {
		// handle error
	}
}

func ExampleTsTimestampColumn_GetRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_GetRanges")
	defer h.Close()

	column := timeseries.TimestampColumn("serie_column_timestamp")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	timestampPoints, err := column.GetRanges(r)
	if err != nil {
		// handle error
	}
	for _, point := range timestampPoints {
		fmt.Println("timestamp:", point.Timestamp(), "- value:", point.Content())
	}
	// Output:
	// timestamp: 1970-01-01 01:00:10 +0100 CET - value: 1970-01-01 01:00:10 +0100 CET
	// timestamp: 1970-01-01 01:00:20 +0100 CET - value: 1970-01-01 01:00:20 +0100 CET
	// timestamp: 1970-01-01 01:00:30 +0100 CET - value: 1970-01-01 01:00:30 +0100 CET
	// timestamp: 1970-01-01 01:00:40 +0100 CET - value: 1970-01-01 01:00:40 +0100 CET
}

func ExampleTsTimestampColumn_EraseRanges() {
	h, timeseries := MustCreateTimeseriesWithData("ExampleTsTimestampColumn_EraseRanges")
	defer h.Close()

	column := timeseries.TimestampColumn("serie_column_timestamp")

	r := NewRange(time.Unix(0, 0), time.Unix(40, 5))
	numberOfErasedValues, err := column.EraseRanges(r)
	if err != nil {
		// handle error
	}
	fmt.Println("Number of erased values:", numberOfErasedValues)
	// Output:
	// Number of erased values: 4
}

func ExampleTimeseriesEntry_Bulk() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTimeseriesEntry_Bulk")
	defer h.Close()

	bulk, err := timeseries.Bulk(NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
	if err != nil {
		// handle error
	}
	fmt.Println("RowCount:", bulk.RowCount())
	// Output:
	// RowCount: 0
}

func ExampleTsBulk_Push() {
	h, timeseries := MustCreateTimeseriesWithColumns("ExampleTsBulk_Push")
	defer h.Close()

	bulk, err := timeseries.Bulk(NewTsColumnInfo("serie_column_blob", TsColumnBlob), NewTsColumnInfo("serie_column_double", TsColumnDouble))
	if err != nil {
		// handle error
	}
	bulk.Row(time.Now()).Blob([]byte("content")).Double(3.2).Append()
	bulk.Row(time.Now()).Blob([]byte("content 2")).Double(4.8).Append()
	rowCount, err := bulk.Push()
	if err != nil {
		// handle error
	}
	fmt.Println("RowCount:", rowCount)
	// Output:
	// RowCount: 2
}

func ExampleNode() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()

	node := h.Node(nodeURI)

	status, _ := node.Status()
	fmt.Println("Status - Max sessions:", status.Network.Partitions.MaxSessions)

	config, _ := node.Config()
	fmt.Println("Config - Root Depot:", config.Local.Depot.Root)
	fmt.Println("Config - Listen On:", config.Local.Network.ListenOn)

	topology, _ := node.Topology()
	fmt.Println("Topology - Successor is same as predecessor:", topology.Successor.Endpoint == topology.Predecessor.Endpoint)
	// Output:
	// Status - Max sessions: 5000
	// Config - Root Depot: db
	// Config - Listen On: 127.0.0.1:30083
	// Topology - Successor is same as predecessor: true
}

func ExampleQuery() {
	h := MustSetupHandle(clusterURI, 120*time.Second)
	defer h.Close()

	blob := h.Blob("alias_blob")
	blob.Put([]byte("asd"), NeverExpires())
	defer blob.Remove()
	blob.AttachTag("all")
	blob.AttachTag("first")

	integer := h.Integer("alias_integer")
	integer.Put(32, NeverExpires())
	defer integer.Remove()
	integer.AttachTag("all")
	integer.AttachTag("second")

	var obtainedAliases []string
	obtainedAliases, _ = h.Query().Tag("all").Execute()
	fmt.Println("Get all aliases:", obtainedAliases)

	obtainedAliases, _ = h.Query().Tag("all").NotTag("second").Execute()
	fmt.Println("Get only first alias:", obtainedAliases)

	obtainedAliases, _ = h.Query().Tag("all").Type("int").Execute()
	fmt.Println("Get only integer alias:", obtainedAliases)

	obtainedAliases, _ = h.Query().Tag("adsda").Execute()
	fmt.Println("Get no aliases:", obtainedAliases)

	_, err := h.Query().NotTag("second").Execute()
	fmt.Println("Error:", err)

	_, err = h.Query().Type("int").Execute()
	fmt.Println("Error:", err)
	// Output:
	// Get all aliases: [alias_blob alias_integer]
	// Get only first alias: [alias_blob]
	// Get only integer alias: [alias_integer]
	// Get no aliases: []
	// Error: query should have at least one valid tag
	// Error: query should have at least one valid tag
}