Package btcec implements support for the elliptic curves needed for bitcoin. Bitcoin uses elliptic curve cryptography using Koblitz curves (specifically secp256k1) for cryptographic functions. See http://www.secg.org/collateral/sec2_final.pdf for details on the standard. This package provides the data structures and functions implementing the crypto/elliptic Curve interface in order to permit using these curves with the standard crypto/ecdsa package provided with Go. Helper functionality is provided to parse signatures and public keys from standard formats. It was designed for use with btcd, but should be general enough for other uses of elliptic curve crypto. It was originally based on some initial work by ThePiachu, but has significantly diverged since then.
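As a sketch of how the pieces fit together (the v1 import path is assumed; v2 lives under github.com/btcsuite/btcd/btcec/v2), the curve can be plugged directly into crypto/ecdsa:

```golang
package main

import (
	"crypto/ecdsa"
	"crypto/rand"
	"fmt"

	"github.com/btcsuite/btcd/btcec" // assumed v1 import path
)

func main() {
	// btcec.S256() implements crypto/elliptic.Curve for secp256k1,
	// so it works with the standard crypto/ecdsa package.
	priv, err := ecdsa.GenerateKey(btcec.S256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated secp256k1 key, X=%x\n", priv.PublicKey.X)
}
```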
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement for gollrb.LLRB trees (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values. There are two implementations; those suffixed with 'G' are generic, usable for any type, and require a passed-in "less" function to define their ordering. Those without this suffix are specific to the 'Item' interface, and use its 'Less' function for ordering.
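For illustration, a minimal sketch of the generic API (NewG takes a degree and a less function):

```golang
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	// Degree 2 creates a 2-3-4 tree: each node holds 1-3 items and 2-4 children.
	tree := btree.NewG[int](2, func(a, b int) bool { return a < b })
	for _, v := range []int{5, 1, 3} {
		tree.ReplaceOrInsert(v)
	}
	if v, ok := tree.Get(3); ok {
		fmt.Println("found", v)
	}
	// Ascend visits items in the order defined by the less function.
	tree.Ascend(func(v int) bool {
		fmt.Println(v)
		return true
	})
}
```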
Ristretto is a fast, fixed-size, in-memory cache with a dual focus on throughput and hit ratio performance. You can easily add Ristretto to an existing system and keep the most valuable data where you need it. This package includes multiple probabilistic data structures needed for admission/eviction metadata. Most are Counting Bloom Filter variations, but a caching-specific feature that is also required is a "freshness" mechanism, which basically serves as a "lifetime" process. This freshness mechanism was described in the original TinyLFU paper, but other mechanisms may be better suited for certain data distributions.
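A minimal usage sketch, where per-item costs drive eviction against MaxCost:

```golang
package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,     // number of keys to track frequency of (10x max items)
		MaxCost:     1 << 30, // maximum cost of cache (1GB)
		BufferItems: 64,      // number of keys per Get buffer
	})
	if err != nil {
		panic(err)
	}
	// Set is asynchronous; Wait flushes the buffers so Get can see the value.
	cache.Set("key", "value", 1)
	cache.Wait()
	if value, found := cache.Get("key"); found {
		fmt.Println(value)
	}
}
```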
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address rather than a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node as down or up based on events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, the SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token-aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts into two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
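Putting the configuration steps above together, a minimal sketch (addresses, keyspace, credentials, and the tweet table are placeholders):

```golang
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	// Pass initial node addresses; a port may be included in the address.
	cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
	cluster.Keyspace = "example"       // placeholder keyspace
	cluster.Consistency = gocql.Quorum // further options live on ClusterConfig
	cluster.Authenticator = gocql.PasswordAuthenticator{
		Username: "user", // placeholder credentials
		Password: "password",
	}

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	// Close the session once you are done with it.
	defer session.Close()

	// Create queries with Session.Query; execute without reading
	// results via Query.Exec.
	if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
		"me", gocql.TimeUUID(), "hello world").Exec(); err != nil {
		log.Fatal(err)
	}
}
```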
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) while executing queries asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
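For example, a manual paging sketch (continuing with the session and placeholder tweet table from the previous sketch; assumes fmt and log imports):

```golang
var pageState []byte
for {
	// PageState(nil) fetches the first page; pass the state from the
	// previous Iter to fetch the next one. This disables automatic prefetch.
	iter := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").
		PageSize(1000).
		PageState(pageState).
		Iter()

	var id gocql.UUID
	var text string // declared inside the loop so each page gets fresh memory
	for iter.Scan(&id, &text) {
		fmt.Println(id, text)
	}

	pageState = iter.PageState()
	if err := iter.Close(); err != nil {
		log.Fatal(err)
	}
	if len(pageState) == 0 {
		break // no more pages (or an error occurred)
	}
}
```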
While Cassandra currently returns exactly PageSize items per page (except for the last page), the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole, in a single statement. Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking a query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still running. The two parallel executions of the query race to return a result; the first result received is the one returned.
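A sketch covering batching and the retry-related knobs described above (same assumptions as the previous sketches, plus a time import):

```golang
// Batch two inserts into a single-partition unlogged batch.
b := session.NewBatch(gocql.UnloggedBatch)
b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "first")
b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "second")
if err := session.ExecuteBatch(b); err != nil {
	log.Fatal(err)
}

// Mark a query idempotent so it is eligible for retries and speculative
// execution; allow one speculative attempt after 200ms without a response.
err := session.Query(`SELECT text FROM tweet WHERE timeline = ?`, "me").
	Idempotent(true).
	RetryPolicy(&gocql.SimpleRetryPolicy{NumRetries: 3}).
	SetSpeculativeExecutionPolicy(&gocql.SimpleSpeculativeExecution{
		NumAttempts:  1,
		TimeoutDelay: 200 * time.Millisecond,
	}).
	Exec()
if err != nil {
	log.Fatal(err)
}
```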
UDTs can be marshaled and unmarshaled to/from either a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that can be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
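For instance, a hypothetical UDT mapping using the cql tags described above:

```golang
// Maps to a CQL user-defined type such as:
//   CREATE TYPE address (street text, city text, zip_code int);
type Address struct {
	Street  string `cql:"street"`
	City    string `cql:"city"`
	ZipCode int    `cql:"zip_code"`
}
```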
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted. This package should not be used directly. Instead, Sops users should install the command line client via `go get -u go.mozilla.org/sops/v3/cmd/sops`, or use the decryption helper provided at `go.mozilla.org/sops/v3/decrypt`. We do not guarantee API stability for any package other than `go.mozilla.org/sops/v3/decrypt`. A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved, keys are stored in cleartext and only values are encrypted. Keeping the values in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value. Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or to use a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting the data key with each master key defined. Being able to decrypt the data key gives access to the document. The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC should be recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
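A minimal sketch of the supported decrypt helper (the file path is a placeholder):

```golang
package main

import (
	"fmt"

	"go.mozilla.org/sops/v3/decrypt"
)

func main() {
	// Decrypt a sops-encrypted YAML document; the second argument selects
	// the input format ("yaml", "json", "dotenv", "binary", ...).
	cleartext, err := decrypt.File("secrets.enc.yaml", "yaml")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", cleartext)
}
```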
Package roaring is an implementation of Roaring Bitmaps in Go. They provide fast compressed bitmap data structures (also called bitsets). They are ideally suited to representing sets of integers over relatively small ranges. See http://roaringbitmap.org for details.
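A quick sketch of typical usage:

```golang
package main

import (
	"fmt"

	"github.com/RoaringBitmap/roaring"
)

func main() {
	rb := roaring.BitmapOf(1, 2, 3, 4, 5, 100, 1000)
	rb.Add(6)
	fmt.Println(rb.Contains(6))      // true
	fmt.Println(rb.GetCardinality()) // 8
}
```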
Package toml is a TOML parser and manipulation library. This version supports the specification as described in https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.5.0.md Go-toml can marshal and unmarshal TOML documents from and to data structures. Go-toml can operate on a TOML document as a tree. Use one of the Load* functions to parse TOML data and obtain a Tree instance, then one of its methods to manipulate the tree. The package github.com/pelletier/go-toml/query implements a system similar to JSONPath to quickly retrieve elements of a TOML document using a single expression. See the package documentation for more information.

Package civil implements types for civil time, a time-zone-independent representation of time that follows the rules of the proleptic Gregorian calendar with exactly 24-hour days, 60-minute hours, and 60-second minutes. Because they lack location information, these types do not represent unique moments or intervals of time. Use time.Time for that purpose.
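Returning to go-toml above, a minimal sketch of loading a document and reading a value from the resulting Tree:

```golang
package main

import (
	"fmt"

	toml "github.com/pelletier/go-toml"
)

func main() {
	tree, err := toml.Load(`
[postgres]
user = "pelletier"
password = "mypassword"`)
	if err != nil {
		panic(err)
	}
	// Navigate the tree with dotted keys.
	user := tree.Get("postgres.user").(string)
	fmt.Println(user)
}
```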
Package rds provides the API client, operations, and parameter types for Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, Db2, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide below. Amazon RDS API Reference For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors. Amazon RDS User Guide For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
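A minimal client sketch, assuming AWS SDK for Go v2 conventions:

```golang
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic(err)
	}
	client := rds.NewFromConfig(cfg)

	// List DB instances. Many RDS operations are asynchronous, so poll
	// instance status where the parameter documentation says so.
	out, err := client.DescribeDBInstances(context.TODO(), &rds.DescribeDBInstancesInput{})
	if err != nil {
		panic(err)
	}
	for _, db := range out.DBInstances {
		fmt.Println(*db.DBInstanceIdentifier, *db.DBInstanceStatus)
	}
}
```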
msgp is a code generation tool for creating methods to serialize and de-serialize Go data structures to and from MessagePack. This package is targeted at the `go generate` tool. To use it, include the following directive in a Go source file with types requiring source generation: The go generate tool should set the proper environment variables for the generator to execute without any command-line flags. However, the following options are supported, if you need them: For more information, please read README.md, and the wiki at github.com/tinylib/msgp
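For example, the directive placed alongside a type it would generate code for (the type and tags are placeholders):

```golang
package models

//go:generate msgp

// Running `go generate` produces a *_gen.go file with MarshalMsg,
// UnmarshalMsg, EncodeMsg, and DecodeMsg methods for the types here.
type User struct {
	Name string `msg:"name"`
	Age  int    `msg:"age"`
}
```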
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in the filter: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a particular Bloom filter for a set of size _n_: Given the particular hashing scheme, it's best to be empirical about this. Note that estimating the FP rate will clear the Bloom filter.
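Concretely, a sketch of the operations described above, assuming the bits-and-blooms/bloom/v3 module path (formerly willf/bloom):

```golang
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	// Create a filter sized for 1,000,000 items at a 1% false-positive rate.
	filter := bloom.NewWithEstimates(1000000, 0.01)

	// Add a string item.
	filter.Add([]byte("Love"))

	// Test membership.
	fmt.Println(filter.Test([]byte("Love"))) // true

	// For numeric data, encode with encoding/binary first.
	n := make([]byte, 4)
	binary.BigEndian.PutUint32(n, 100)
	filter.Add(n)
}
```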
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in the filter: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a Bloom filter with _m_ bits and _k_ hashing functions for a set of size _n_: You can use it to validate the computed m, k parameters: or You would expect ActualfpRate to be close to the desired fp in these cases. The EstimateFalsePositiveRate function creates a temporary Bloom filter. It is also relatively expensive and only meant for validation.
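A sketch of that validation flow, assuming the same module path as above and its EstimateParameters helper:

```golang
package main

import (
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	n, fp := uint(100000), 0.01

	// Compute m (bits) and k (hash functions) for the desired rate...
	m, k := bloom.EstimateParameters(n, fp)

	// ...then empirically validate them. Note this builds a temporary
	// filter internally and is relatively expensive.
	actualFpRate := bloom.EstimateFalsePositiveRate(m, k, n)
	fmt.Printf("requested %v, estimated %v\n", fp, actualFpRate)
}
```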
Package geoip2 provides an easy-to-use API for the MaxMind GeoIP2 and GeoLite2 databases; this package does not support GeoIP Legacy databases. The structs provided by this package match the internal structure of the data in the MaxMind databases. See github.com/oschwald/maxminddb-golang for more advanced use cases. Example provides a basic example of using the API. Use of the Country method is analogous to that of the City method.
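A basic sketch (the database path and IP are placeholders):

```golang
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/oschwald/geoip2-golang"
)

func main() {
	db, err := geoip2.Open("GeoLite2-City.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	record, err := db.City(net.ParseIP("81.2.69.142"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(record.City.Names["en"], record.Country.IsoCode)
}
```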
Package jsonnet implements a parser and evaluator for jsonnet. Jsonnet is a domain specific configuration language that helps you define JSON data. Jsonnet lets you compute fragments of JSON within the structure, bringing the same benefit to structured data that templating languages bring to plain text. See http://jsonnet.org/ for a full language description and tutorial.
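A minimal evaluation sketch using the go-jsonnet VM API:

```golang
package main

import (
	"fmt"

	jsonnet "github.com/google/go-jsonnet"
)

func main() {
	vm := jsonnet.MakeVM()
	// Compute fragments of JSON within the structure.
	out, err := vm.EvaluateAnonymousSnippet("example.jsonnet", `{
		person1: { name: "Alice", welcome: "Hello " + self.name + "!" },
		person2: self.person1 { name: "Bob" },
	}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```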
Package aztables can access an Azure Storage or CosmosDB account. The aztables package is capable of: The Azure Data Tables library allows you to interact with two types of resources:

* the tables in your account
* the entities within those tables

Interaction with these resources starts with an instance of a client. To create a client object, you will need the account's table service endpoint URL and a credential that allows you to access the account. The clients support different forms of authentication. The aztables library supports any of the `azcore.TokenCredential` interfaces, authorization via a Connection String, or authorization with a Shared Access Signature token. To use an account shared key (aka account key or access key), provide the key as a string. This can be found in your storage account in the Azure Portal under the "Access Keys" section. Use the key as the credential parameter to authenticate the client:

Using a Connection String

Depending on your use case and authorization method, you may prefer to initialize a client instance with a connection string instead of providing the account URL and credential separately. To do this, pass the connection string to the NewServiceClientFromConnectionString function. The connection string can be found in your storage account in the [Azure Portal][azure_portal_account_url] under the "Access Keys" section or with the following Azure CLI command:

Using a Shared Access Signature

To use a shared access signature (SAS) token, provide the token at the end of your service URL. You can generate a SAS token from the Azure Portal under Shared Access Signature or use the ServiceClient.GetAccountSASToken or Client.GetTableSASToken() functions.

Common uses of the Table service include:

* Storing TBs of structured data capable of serving web scale applications
* Storing datasets that do not require complex joins, foreign keys, or stored procedures and can be de-normalized for fast access
* Quickly querying data using a clustered index
* Accessing data using the OData protocol and LINQ filter expressions

The following components make up the Azure Data Tables Service:

* The account
* A table within the account, which contains a set of entities
* An entity within a table, as a dictionary

The Azure Data Tables client library for Go allows you to interact with each of these components through the use of a dedicated client object. Two different clients are provided to interact with the various components of the Table Service: 1. **`ServiceClient`** - 2. **`Client`** - Entities are similar to rows. An entity has a PartitionKey, a RowKey, and a set of properties. A property is a name-value pair, similar to a column. Entities in the same table are not required to have the same set of properties. Entities are returned as JSON, allowing developers to use JSON marshalling and unmarshalling techniques. Additionally, you can use the aztables.EDMEntity to ensure proper round-trip serialization of all properties. The following sections provide several code snippets covering some of the most common Table tasks, including:

* Creating a table
* Creating entities
* Querying entities

Create a table in your account and get a `Client` to perform operations on the newly created table:

Creating Entities

Querying entities
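For instance, a sketch of shared-key authentication, table creation, and entity creation (account name, key, and table name are placeholders):

```golang
package main

import (
	"context"
	"encoding/json"

	"github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
)

func main() {
	cred, err := aztables.NewSharedKeyCredential("myaccount", "account-key-from-portal")
	if err != nil {
		panic(err)
	}
	svc, err := aztables.NewServiceClientWithSharedKey("https://myaccount.table.core.windows.net/", cred, nil)
	if err != nil {
		panic(err)
	}

	// Create a table and get a Client scoped to it.
	if _, err := svc.CreateTable(context.TODO(), "myTable", nil); err != nil {
		panic(err)
	}
	client := svc.NewClient("myTable")

	// Create an entity; EDMEntity ensures round-trip serialization of
	// typed properties such as 64-bit integers.
	entity := aztables.EDMEntity{
		Entity: aztables.Entity{
			PartitionKey: "pk001",
			RowKey:       "rk001",
		},
		Properties: map[string]any{
			"Price": 3.99,
			"Count": aztables.EDMInt64(1234),
		},
	}
	data, err := json.Marshal(entity)
	if err != nil {
		panic(err)
	}
	if _, err := client.AddEntity(context.TODO(), data, nil); err != nil {
		panic(err)
	}
}
```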
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be removed in the future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`
- `HTTPClient` - Instead pass httpClient to FetchMetadata
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8000", nil)
}
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Certificate/key file names below are placeholders.
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	idpMetadataURL, err := url.Parse("https://samltest.id/saml/idp")
	if err != nil {
		panic(err)
	}
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, err := url.Parse("http://localhost:8000")
	if err != nil {
		panic(err)
	}

	samlSP, _ := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	app := http.HandlerFunc(hello)
	http.Handle("/hello", samlSP.RequireAccount(app))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
2. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
3. samltest.id prompts you for a username and password.
4. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
5. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
6. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package secp256k1 implements optimized secp256k1 elliptic curve operations in pure Go. This package provides an optimized pure Go implementation of elliptic curve cryptography operations over the secp256k1 curve as well as data structures and functions for working with public and private secp256k1 keys. See https://www.secg.org/sec2-v2.pdf for details on the standard. In addition, sub packages are provided to produce, verify, parse, and serialize ECDSA signatures and EC-Schnorr-DCRv0 (a custom Schnorr-based signature scheme specific to Decred) signatures. See the README.md files in the relevant sub packages for more details about those aspects. An overview of the features provided by this package is as follows: It also provides an implementation of the Go standard library crypto/elliptic Curve interface via the S256 function so that it may be used with other packages in the standard library such as crypto/tls, crypto/x509, and crypto/ecdsa. However, in the case of ECDSA, it is highly recommended to use the ecdsa sub package of this package instead since it is optimized specifically for secp256k1 and is significantly faster as a result. Although this package was primarily written for dcrd, it has intentionally been designed so it can be used as a standalone package for any projects needing to use optimized secp256k1 elliptic curve cryptography. Finally, a comprehensive suite of tests is provided to ensure a high level of quality assurance. At the time of this writing, the primary public key cryptography in widespread use on the Decred network used to secure coins is based on elliptic curves defined by the secp256k1 domain parameters. This example demonstrates use of GenerateSharedSecret to encrypt a message for a recipient's public key, and subsequently decrypt the message using the recipient's private key.
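As a small sketch, assuming the v4 module path:

```golang
package main

import (
	"bytes"
	"fmt"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
)

func main() {
	// Alice and Bob each generate a secp256k1 key pair.
	alice, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		panic(err)
	}
	bob, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		panic(err)
	}

	// Each side combines its private key with the other's public key;
	// both derive the same ECDH secret, usable as symmetric key material.
	s1 := secp256k1.GenerateSharedSecret(alice, bob.PubKey())
	s2 := secp256k1.GenerateSharedSecret(bob, alice.PubKey())
	fmt.Println(bytes.Equal(s1, s2)) // true
}
```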
Package jsonapi provides a serializer and deserializer for jsonapi.org spec payloads. You can keep your model structs as is and use struct field tags to indicate to jsonapi how you want your response built or your request deserialized. What about my relationships? jsonapi supports relationships out of the box and will even side load them in your response into an "included" array that contains the associated objects. jsonapi uses StructField tags to annotate the struct fields that you already have and use in your app, and then reads and writes jsonapi.org output based on the instructions you give the library in your jsonapi tags. Example structs use a Blog > Post > Comment structure.

jsonapi Tag Reference

Value, primary: "primary,<type field output>". This indicates that this is the primary key field for this struct type. Tag value arguments are comma separated. The first argument must be "primary", and the second must be the name that should appear in the "type" field for all data objects that represent this type of model.

Value, attr: "attr,<key name in attributes hash>[,<extra arguments>]". These fields' values end up in the "attributes" hash for a record. The first argument must be "attr", and the second should be the name for the key to display in the "attributes" hash for that record. The following extra arguments are also supported: "omitempty": excludes the field's value from the "attributes" hash. "iso8601": uses the ISO 8601 timestamp format when serialising or deserialising the time.Time value.

Value, relation: "relation,<key name in relationships hash>". Relations are struct fields that represent a one-to-one or one-to-many relationship with other structs. jsonapi will traverse the graph of relationships and marshal or unmarshal records. The first argument must be "relation", and the second should be the name of the relationship, used as the key in the "relationships" hash for the record.

Use the methods below to Marshal and Unmarshal jsonapi.org json payloads. Visit the readme at https://github.com/google/jsonapi
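A brief sketch of the tags and marshaling helper described above (the field layout is illustrative):

```golang
package main

import (
	"os"

	"github.com/google/jsonapi"
)

type Blog struct {
	ID    int     `jsonapi:"primary,blogs"`
	Title string  `jsonapi:"attr,title"`
	Posts []*Post `jsonapi:"relation,posts"`
}

type Post struct {
	ID   int    `jsonapi:"primary,posts"`
	Body string `jsonapi:"attr,body,omitempty"`
}

func main() {
	blog := &Blog{ID: 1, Title: "hello", Posts: []*Post{{ID: 2, Body: "world"}}}
	// Writes a jsonapi.org document, side-loading posts into "included".
	if err := jsonapi.MarshalPayload(os.Stdout, blog); err != nil {
		panic(err)
	}
}
```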
Package toml is a TOML parser and manipulation library. This version supports the specification as described in https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md Go-toml can marshal and unmarshal TOML documents from and to data structures. Go-toml can operate on a TOML document as a tree. Use one of the Load* functions to parse TOML data and obtain a Tree instance, then one of its methods to manipulate the tree. The package github.com/pelletier/go-toml/query implements a system similar to JSONPath to quickly retrieve elements of a TOML document using a single expression. See the package documentation for more information.
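For unmarshaling into Go structs, a minimal sketch (the struct layout is a placeholder):

```golang
package main

import (
	"fmt"

	toml "github.com/pelletier/go-toml"
)

type Postgres struct {
	User     string `toml:"user"`
	Password string `toml:"password"`
}

type Config struct {
	Postgres Postgres `toml:"postgres"`
}

func main() {
	doc := []byte(`
[postgres]
user = "pelletier"
password = "mypassword"`)

	var cfg Config
	if err := toml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Postgres.User)
}
```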
Package csvpp implements the IETF CSV++ specification (draft-mscaldas-csvpp-02). CSV++ extends traditional CSV to support arrays and structured fields within cells, enabling complex data representation while maintaining CSV's simplicity. This package wraps encoding/csv and is fully compatible with RFC 4180. CSV++ introduces four field types beyond simple text values: Per draft-02, the default tilde (~) delimiter for empty brackets applies only to top-level (first-level) arrays. Nested arrays MUST explicitly specify a delimiter. These field types are represented by the FieldKind constants: SimpleField, ArrayField, StructuredField, and ArrayStructuredField. Reading CSV++ data: Writing CSV++ data: Use Marshal and Unmarshal for automatic struct mapping with struct tags: The IETF CSV++ specification recommends using specific delimiters for nested structures to avoid conflicts. The recommended progression is: This package uses ~ and ^ as defaults, matching the IETF recommendation. This package wraps encoding/csv and inherits its RFC 4180 compliance. The Reader and Writer types expose the same configuration options: The MaxNestingDepth option (default: 10) limits the depth of nested structures to prevent stack overflow attacks from maliciously crafted input. When CSV files are opened in spreadsheet applications (Excel, Google Sheets, etc.), values beginning with '=', '+', '-', or '@' may be interpreted as formulas. This can lead to security vulnerabilities known as "CSV injection" or "formula injection". Use the HasFormulaPrefix function to detect potentially dangerous values: Note: This package does not automatically escape formula prefixes to preserve data integrity. Applications should implement appropriate escaping based on their specific security requirements and target environments. The package defines the following sentinel errors: Parse errors are wrapped in ParseError, which provides line/column information. Default delimiters follow IETF recommendations: For the complete IETF CSV++ specification, see: https://datatracker.ietf.org/doc/draft-mscaldas-csvpp/
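As an illustrative sketch of the formula-injection check only (hedged: the exact signature of HasFormulaPrefix is an assumption based on the description above, taken as string in, bool out):

```golang
// Sketch only; assumes a csvpp package import and the log package.
for _, cell := range []string{"=SUM(A1:A9)", "+1234", "hello"} {
	if csvpp.HasFormulaPrefix(cell) {
		log.Printf("potentially dangerous spreadsheet value: %q", cell)
	}
}
```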
Package easygen is an easy to use universal code/text generator library. It can be used as a text or HTML generator for arbitrary purposes with arbitrary data and templates. It can be used as a code generator, or for anything that is structurally repetitive. Some command-line parameter handling code generators are provided as examples, including ones for Go's built-in flag package and for the viper & cobra packages. Many examples have been provided to showcase its functionality and the different ways to use it.
The OCR module of package vergilevhasi provides digit recognition for VKN extraction. This is a ZERO-DEPENDENCY implementation that works without:

- ONNX Runtime
- TensorFlow Lite
- Tesseract
- Any external tools

It uses:

- Pure Go image processing
- Built-in PDF text/image extraction from the pdfcpu library
- Code128 barcode scanning with the gozxing library
- Feature-based digit recognition with a trained classifier

Usage:

Package vergilevhasi provides tools for parsing Turkish tax plate (Vergi Levhası) PDF documents. This library extracts structured data from tax plate PDFs issued by the Turkish Revenue Administration (Gelir İdaresi Başkanlığı - GİB). The parser extracts the following information: For PDFs where the VKN is embedded as a barcode image rather than text, use the OCR parser:
Package json implements semantic processing of JSON as specified in RFC 8259. JSON is a simple data interchange format that can represent primitive data types such as booleans, strings, and numbers, in addition to structured data types such as objects and arrays. Marshal and Unmarshal encode and decode Go values to/from JSON text contained within a []byte. MarshalWrite and UnmarshalRead operate on JSON text by writing to or reading from an io.Writer or io.Reader. MarshalEncode and UnmarshalDecode operate on JSON text by encoding to or decoding from a jsontext.Encoder or jsontext.Decoder. Options may be passed to each of the marshal or unmarshal functions to configure the semantic behavior of marshaling and unmarshaling (i.e., alter how JSON data is understood as Go data and vice versa). jsontext.Options may also be passed to the marshal or unmarshal functions to configure the syntactic behavior of encoding or decoding. The data types of JSON are mapped to/from the data types of Go based on the closest logical equivalent between the two type systems. For example, a JSON boolean corresponds with a Go bool, a JSON string corresponds with a Go string, a JSON number corresponds with a Go int, uint or float, a JSON array corresponds with a Go slice or array, and a JSON object corresponds with a Go struct or map. See the documentation on Marshal and Unmarshal for a comprehensive list of how the JSON and Go type systems correspond. Arbitrary Go types can customize their JSON representation by implementing Marshaler, MarshalerTo, Unmarshaler, or UnmarshalerFrom. This provides authors of Go types with control over how their types are serialized as JSON. Alternatively, users can implement functions that match MarshalFunc, MarshalToFunc, UnmarshalFunc, or UnmarshalFromFunc to specify the JSON representation for arbitrary types. This provides callers of JSON functionality with control over how any arbitrary type is serialized as JSON. A Go struct is naturally represented as a JSON object, where each Go struct field corresponds with a JSON object member. When marshaling, all Go struct fields are recursively encoded in depth-first order as JSON object members except those that are ignored or omitted. When unmarshaling, JSON object members are recursively decoded into the corresponding Go struct fields. Object members that do not match any struct fields, also known as “unknown members”, are ignored by default or rejected if RejectUnknownMembers is specified. The representation of each struct field can be customized in the "json" struct field tag, where the tag is a comma separated list of options. As a special case, if the entire tag is `json:"-"`, then the field is ignored with regard to its JSON representation. Some options also have equivalent behavior controlled by a caller-specified Options. Field-specified options take precedence over caller-specified options. The first option is the JSON object name override for the Go struct field. If the name is not specified, then the Go struct field name is used as the JSON object name. JSON names containing commas or quotes, or names identical to "" or "-", can be specified using a single-quoted string literal, where the syntax is identical to the Go grammar for a double-quoted string literal, but instead uses single quotes as the delimiters. By default, unmarshaling uses case-sensitive matching to identify the Go struct field associated with a JSON object name. 
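As a minimal sketch of the basic entry points (the import path is assumed; the package has also lived at github.com/go-json-experiment/json):

```golang
package main

import (
	"fmt"

	"encoding/json/v2" // assumed import path
)

type Point struct {
	X int `json:"x"`
	Y int `json:"y"`
}

func main() {
	b, err := json.Marshal(Point{X: 1, Y: 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"x":1,"y":2}

	var p Point
	// Unmarshaling matches the "x" and "y" names case-sensitively by default.
	if err := json.Unmarshal(b, &p); err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```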
After the name, the following tag options are supported: omitzero: When marshaling, the "omitzero" option specifies that the struct field should be omitted if the field value is zero as determined by the "IsZero() bool" method if present, otherwise based on whether the field is the zero Go value. This option has no effect when unmarshaling. omitempty: When marshaling, the "omitempty" option specifies that the struct field should be omitted if the field value would have been encoded as a JSON null, empty string, empty object, or empty array. This option has no effect when unmarshaling. string: The "string" option specifies that StringifyNumbers be set when marshaling or unmarshaling a struct field value. This causes numeric types to be encoded as a JSON number within a JSON string, and to be decoded from a JSON string containing the JSON number without any surrounding whitespace. This extra level of encoding is often necessary since many JSON parsers cannot precisely represent 64-bit integers. case: When unmarshaling, the "case" option specifies how JSON object names are matched with the JSON name for Go struct fields. The option is a key-value pair specified as "case:value" where the value must either be 'ignore' or 'strict'. The 'ignore' value specifies that matching is case-insensitive where dashes and underscores are also ignored. If multiple fields match, the first declared field in breadth-first order takes precedence. The 'strict' value specifies that matching is case-sensitive. This takes precedence over the MatchCaseInsensitiveNames option. inline: The "inline" option specifies that the JSON representable content of this field type is to be promoted as if they were specified in the parent struct. It is the JSON equivalent of Go struct embedding. A Go embedded field is implicitly inlined unless an explicit JSON name is specified. The inlined field must be a Go struct (that does not implement any JSON methods), jsontext.Value, map[~string]T, or an unnamed pointer to such types. When marshaling, inlined fields from a pointer type are omitted if it is nil. Inlined fields of type jsontext.Value and map[~string]T are called “inlined fallbacks” as they can represent all possible JSON object members not directly handled by the parent struct. Only one inlined fallback field may be specified in a struct, while many non-fallback fields may be specified. This option must not be specified with any other option (including the JSON name). unknown: The "unknown" option is a specialized variant of the inlined fallback to indicate that this Go struct field contains any number of unknown JSON object members. The field type must be a jsontext.Value, map[~string]T, or an unnamed pointer to such types. If DiscardUnknownMembers is specified when marshaling, the contents of this field are ignored. If RejectUnknownMembers is specified when unmarshaling, any unknown object members are rejected regardless of whether an inlined fallback with the "unknown" option exists. This option must not be specified with any other option (including the JSON name). format: The "format" option specifies a format flag used to specialize the formatting of the field value. The option is a key-value pair specified as "format:value" where the value must be either a literal consisting of letters and numbers (e.g., "format:RFC3339") or a single-quoted string literal (e.g., "format:'2006-01-02'"). The interpretation of the format flag is determined by the struct field type. 
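For illustration, a sketch of a struct exercising several of these options (assumes time and encoding/json/jsontext imports):

```golang
type Event struct {
	// Name override plus omitzero: omitted when the time is the zero value.
	// The format flag selects RFC 3339 formatting for time.Time.
	Start time.Time `json:"start,omitzero,format:RFC3339"`

	// omitempty: omitted when it would encode as an empty JSON string.
	Note string `json:"note,omitempty"`

	// string: the 64-bit integer is quoted to avoid precision loss in
	// parsers that represent numbers as float64.
	ID int64 `json:"id,string"`

	// case:ignore: JSON names are matched case-insensitively for this field.
	Color string `json:"color,case:ignore"`

	// unknown: an inlined fallback that collects JSON object members
	// not matched by any other field.
	Extra map[string]jsontext.Value `json:",unknown"`
}
```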
The "omitzero" and "omitempty" options are mostly semantically identical. The former is defined in terms of the Go type system, while the latter in terms of the JSON type system. Consequently they behave differently in some circumstances. For example, only a nil slice or map is omitted under "omitzero", while an empty slice or map is omitted under "omitempty" regardless of nilness. The "omitzero" option is useful for types with a well-defined zero value (e.g., net/netip.Addr) or have an IsZero method (e.g., time.Time.IsZero). Every Go struct corresponds to a list of JSON representable fields which is constructed by performing a breadth-first search over all struct fields (excluding unexported or ignored fields), where the search recursively descends into inlined structs. The set of non-inlined fields in a struct must have unique JSON names. If multiple fields all have the same JSON name, then the one at shallowest depth takes precedence and the other fields at deeper depths are excluded from the list of JSON representable fields. If multiple fields at the shallowest depth have the same JSON name, but exactly one is explicitly tagged with a JSON name, then that field takes precedence and all others are excluded from the list. This is analogous to Go visibility rules for struct field selection with embedded struct types. Marshaling or unmarshaling a non-empty struct without any JSON representable fields results in a SemanticError. Unexported fields must not have any `json` tags except for `json:"-"`. JSON is frequently used as a data interchange format to communicate between different systems, possibly implemented in different languages. For interoperability and security reasons, it is important that all implementations agree upon the semantic meaning of the data. For example, suppose we have two micro-services. The first service is responsible for authenticating a JSON request, while the second service is responsible for executing the request (having assumed that the prior service authenticated the request). If an attacker were able to maliciously craft a JSON request such that both services believe that the same request is from different users, it could bypass the authenticator with valid credentials for one user, but maliciously perform an action on behalf of a different user. According to RFC 8259, there unfortunately exist many JSON texts that are syntactically valid but semantically ambiguous. For example, the standard does not define how to interpret duplicate names within an object. The v1 encoding/json and encoding/json/v2 packages interpret some inputs in different ways. In particular: The standard specifies that JSON must be encoded using UTF-8. By default, v1 replaces invalid bytes of UTF-8 in JSON strings with the Unicode replacement character, while v2 rejects inputs with invalid UTF-8. To change the default, specify the jsontext.AllowInvalidUTF8 option. The replacement of invalid UTF-8 is a form of data corruption that alters the precise meaning of strings. The standard does not specify a particular behavior when duplicate names are encountered within a JSON object, which means that different implementations may behave differently. By default, v1 allows for the presence of duplicate names, while v2 rejects duplicate names. To change the default, specify the jsontext.AllowDuplicateNames option. If allowed, object members are processed in the order they are observed, meaning that later values will replace or be merged into prior values, depending on the Go value type. 
The standard defines a JSON object as an unordered collection of name/value pairs. While ordering can be observed through the underlying jsontext API, both v1 and v2 generally avoid exposing the ordering. No application should semantically depend on the order of object members. Allowing duplicate names is a vector through which ordering of members can accidentally be observed and depended upon. The standard suggests that JSON object names are typically compared based on equality of the sequence of Unicode code points, which implies that comparing names is often case-sensitive. When unmarshaling a JSON object into a Go struct, by default, v1 uses a (loose) case-insensitive match on the name, while v2 uses a (strict) case-sensitive match on the name. To change the default, specify the MatchCaseInsensitiveNames option. The use of case-insensitive matching provides another vector through which duplicate names can occur. Allowing case-insensitive matching means that v1 or v2 might interpret JSON objects differently from most other JSON implementations (which typically use a case-sensitive match). The standard does not specify a particular behavior when an unknown name in a JSON object is encountered. When unmarshaling a JSON object into a Go struct, by default both v1 and v2 ignore unknown names and their corresponding values. To change the default, specify the RejectUnknownMembers option. The standard suggests that implementations may use a float64 to represent a JSON number. Consequently, large JSON integers may lose precision when stored as a floating-point type. Both v1 and v2 correctly preserve precision when marshaling and unmarshaling a concrete integer type. However, even if v1 and v2 preserve precision for concrete types, other JSON implementations may not be able to preserve precision for outputs produced by v1 or v2. The `string` tag option can be used to specify that an integer type is to be quoted within a JSON string to avoid loss of precision. Furthermore, v1 and v2 may still lose precision when unmarshaling into an any interface value, where unmarshal uses a float64 by default to represent a JSON number. To change the default, specify the WithUnmarshalers option with a custom unmarshaler that pre-populates the interface value with a concrete Go type that can preserve precision. RFC 8785 specifies a canonical form for any JSON text, which explicitly defines specific behaviors that RFC 8259 leaves undefined. In theory, if a text can successfully jsontext.Value.Canonicalize without changing the semantic meaning of the data, then it provides a greater degree of confidence that the data is more secure and interoperable. The v2 API generally chooses more secure defaults than v1, but care should still be taken with large integers or unknown members. Unmarshal matches JSON object names with Go struct fields using a case-sensitive match, but can be configured to use a case-insensitive match with the "case:ignore" option. This permits unmarshaling from inputs that use naming conventions such as camelCase, snake_case, or kebab-case. By default, JSON object names for Go struct fields are derived from the Go field name, but may be specified in the `json` tag. Due to JSON's heritage in JavaScript, the most common naming convention used for JSON object names is camelCase. The "format" tag option can be used to alter the formatting of certain types. JSON objects can be inlined within a parent object similar to how Go structs can be embedded within a parent struct. 
The inlining rules are similar to those of Go embedding, but operate upon the JSON namespace. Go struct fields can be omitted from the output depending on either the input Go value or the output JSON encoding of the value. The "omitzero" option omits a field if it is the zero Go value or implements an "IsZero() bool" method that reports true. The "omitempty" option omits a field if it encodes as an empty JSON value, which we define as a JSON null or empty JSON string, object, or array. In many cases, the behaviors of "omitzero" and "omitempty" are equivalent. If both provide the desired effect, then using "omitzero" is preferred. The exact order of a JSON object's members can be preserved through the use of a specialized type that implements MarshalerTo and UnmarshalerFrom. Some Go types have a custom JSON representation where the implementation is delegated to some external package. Consequently, the "json" package will not know how to use that external implementation. For example, the google.golang.org/protobuf/encoding/protojson package implements JSON for all google.golang.org/protobuf/proto.Message types. WithMarshalers and WithUnmarshalers can be used to configure "json" and "protojson" to cooperate together. When implementing HTTP endpoints, it is common to be operating with an io.Reader and an io.Writer. The MarshalWrite and UnmarshalRead functions assist in operating on such input/output types. UnmarshalRead reads the entirety of the io.Reader to ensure that io.EOF is encountered without any unexpected bytes after the top-level JSON value. If a type implements encoding.TextMarshaler and/or encoding.TextUnmarshaler, then the MarshalText and UnmarshalText methods are used to encode/decode the value to/from a JSON string. Due to version skew, the set of JSON object members known at compile-time may differ from the set of members encountered at execution-time. As such, it may be useful to have finer-grained handling of unknown members. This package supports preserving, rejecting, or discarding such members.
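For the io.Reader/io.Writer case, a hedged sketch of an HTTP handler (the handler and its request/response shapes are hypothetical; MarshalWrite and UnmarshalRead are the functions named above, and net/http is assumed):

	func handle(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Query string `json:"query"`
		}
		// UnmarshalRead consumes the body through io.EOF and rejects
		// trailing bytes after the top-level JSON value.
		if err := json.UnmarshalRead(r.Body, &req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// MarshalWrite streams the response without an intermediate []byte.
		_ = json.MarshalWrite(w, map[string]string{"echo": req.Query})
	}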
Package bloom provides data structures and methods for creating Bloom filters. A Bloom filter is a representation of a set of _n_ items, where the main requirement is to make membership queries; _i.e._, whether an item is a member of a set. A Bloom filter has two parameters: _m_, a maximum size (typically a reasonably large multiple of the cardinality of the set to represent) and _k_, the number of hashing functions on elements of the set. (The actual hashing functions are important, too, but this is not a parameter for this implementation). A Bloom filter is backed by a BitSet; a key is represented in the filter by setting the bits at each value of the hashing functions (modulo _m_). Set membership is done by _testing_ whether the bits at each value of the hashing functions (again, modulo _m_) are set. If so, the item is in the set. If the item is actually in the set, a Bloom filter will never fail (the true positive rate is 1.0); but it is susceptible to false positives. The art is to choose _k_ and _m_ correctly. In this implementation, the hashing function used is murmurhash, a non-cryptographic hashing function. This implementation accepts keys for setting and testing as []byte. Thus, to add a string item, "Love": Similarly, to test if "Love" is in the filter: For numeric data, I recommend that you look into the encoding/binary library. But, for example, to add a uint32 to the filter: Finally, there is a method to estimate the false positive rate of a particular Bloom filter for a set of size _n_: Given the particular hashing scheme, it's best to be empirical about this. Note that estimating the FP rate will clear the Bloom filter.
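The stripped examples, reconstructed as a hedged sketch (constructor and method names are assumed from the prose and the bits-and-blooms/bloom API):

	filter := bloom.NewWithEstimates(1000000, 0.01) // capacity estimate and target FP rate

	filter.Add([]byte("Love")) // add a string item
	if filter.Test([]byte("Love")) {
		// "Love" is probably in the set: false positives are possible,
		// false negatives are not.
	}

	// Numeric data goes through encoding/binary first:
	n := make([]byte, 4)
	binary.BigEndian.PutUint32(n, 100)
	filter.Add(n)

	// Empirical false positive estimate for a set of size 1000;
	// note that this clears the filter.
	fp := filter.EstimateFalsePositiveRate(1000)
	_ = fp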
Package expr is an engine that can evaluate expressions. You can pass variables into the expression, which can be of any valid Go type (including structs): Expr uses reflection for accessing and iterating over the passed data. For example, you can pass nested structures without any modification or preparation: You can also pass functions into the expression: All methods of a passed struct are also available as functions inside expr: If you plan to execute an expression many times, it is best to parse it first, and only once: The expr package supports a strict parse mode in which some type checks are performed during parsing. To parse an expression in strict mode, define all of the variables that are used: The Parse function will check used variables, accessed fields, logical operators, and perform some other type checks. If you try to use an undeclared variable, or access an unknown field, an error will be returned during parsing: It is also possible to define all used variables and functions using expr.Env and a struct: Or with a map: A compiled AST can be converted back to a string expression using the fmt.Stringer interface: Inside the Expr engine there is no distinction between int, uint, and float types (as in JavaScript). All numbers inside the Expr engine are represented as `float64`. Keep this in mind if you use any of the binary operators (`+`, `-`, `/`, `*`, etc.). Otherwise, types remain unchanged.
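A hedged sketch of the parse-once, run-many pattern (Parse, Run, and expr.Env are the names used above; exact signatures vary between versions of the package):

	env := map[string]interface{}{
		"age": 30,
		"max": func(a, b float64) float64 { return math.Max(a, b) },
	}

	// Parse once, declaring all used variables so strict mode can reject
	// undeclared names and unknown fields at parse time.
	node, err := expr.Parse(`age > max(18, 21)`, expr.Env(env))
	if err != nil {
		log.Fatal(err)
	}

	// ...then evaluate as many times as needed.
	out, err := expr.Run(node, env)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // true, since 30 > 21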
Package wincred provides primitives for accessing the Windows Credentials Management API. This includes functions for retrieval, listing and storage of credentials as well as Go structures for convenient access to the credential data. A more detailed description of Windows Credentials Management can be found on Docs: https://docs.microsoft.com/en-us/windows/desktop/SecAuthN/credentials-management
Package hercules contains the functions which are needed to gather various statistics from a Git repository. The analysis is expressed in the form of a tree: there are nodes - "pipeline items" - which require some other nodes to be executed before themselves and in turn provide the data for dependent nodes. There are several service items which do not produce any useful statistics but rather provide the requirements for other items. The top-level items include:

- BurndownAnalysis - line burndown statistics for project, files and developers.
- CouplesAnalysis - coupling statistics for files and developers.
- ShotnessAnalysis - structural hotness and couples, by any Babelfish UAST XPath (functions by default).

The typical API usage is to initialize the Pipeline class: Then add the required analysis: This call will add all the needed intermediate pipeline items. Then link and execute the analysis tree: Finally extract the result: The actual usage example is cmd/hercules/root.go - the command line tool's code. Hercules depends heavily on https://github.com/src-d/go-git and leverages the diff algorithm through https://github.com/sergi/go-diff. Besides, BurndownAnalysis involves File and RBTree. These are low-level data structures which enable incremental blaming. File carries an instance of RBTree and the current line burndown state. RBTree implements the red-black balanced binary tree and is based on https://github.com/yasushi-saito/rbtree. Coupling stats are supposed to be further processed rather than observed directly. labours.py uses Swivel embeddings and visualises them in Tensorflow Projector. Shotness analysis as well as other UAST-featured items relies on [Babelfish](https://doc.bblf.sh) and requires the server to be running.
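A hedged sketch of that flow (type and method names are taken from the prose and the project README; signatures are assumed, and repository/commits stand for an opened go-git repository and its commit list):

	// Initialize the Pipeline on an open go-git repository.
	pipeline := hercules.NewPipeline(repository)

	// Add the required analysis; this deploys all needed intermediate items.
	burndown := pipeline.DeployItem(&hercules.BurndownAnalysis{})

	// Link and execute the analysis tree.
	pipeline.Initialize(nil)
	results, err := pipeline.Run(commits)
	if err != nil {
		log.Fatal(err)
	}

	// Extract the result for the deployed item.
	result := results[burndown]
	_ = result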
Package pdf implements reading of PDF files. PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package. Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds: The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported. The basic structure of the PDF file is exposed as the graph of Values. Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
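A short sketch of the zero-value traversal style described above (Open, Trailer, and Key are assumed names from the rsc.io/pdf API; Int64 is the accessor named in the text):

	r, err := pdf.Open("doc.pdf")
	if err != nil {
		log.Fatal(err)
	}
	// Chain accessors with no intermediate error checks: a missing key
	// anywhere in the chain yields a zero Value, and Int64 on a zero
	// Value yields 0.
	count := r.Trailer().Key("Root").Key("Pages").Key("Count").Int64()
	fmt.Println("pages:", count)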
Package secp256k1 implements optimized secp256k1 elliptic curve operations in pure Go. This package provides an optimized pure Go implementation of elliptic curve cryptography operations over the secp256k1 curve as well as data structures and functions for working with public and private secp256k1 keys. See https://www.secg.org/sec2-v2.pdf for details on the standard. In addition, sub packages are provided to produce, verify, parse, and serialize ECDSA signatures and EC-Schnorr-DCRv0 (a custom Schnorr-based signature scheme specific to Decred) signatures. See the README.md files in the relevant sub packages for more details about those aspects. An overview of the features provided by this package is as follows: It also provides an implementation of the Go standard library crypto/elliptic Curve interface via the S256 function so that it may be used with other packages in the standard library such as crypto/tls, crypto/x509, and crypto/ecdsa. However, in the case of ECDSA, it is highly recommended to use the ecdsa sub package of this package instead since it is optimized specifically for secp256k1 and is significantly faster as a result. Although this package was primarily written for dcrd, it has intentionally been designed so it can be used as a standalone package for any projects needing to use optimized secp256k1 elliptic curve cryptography. Finally, a comprehensive suite of tests is provided to ensure a high level of quality assurance. At the time of this writing, the primary public key cryptography in widespread use on the Decred network used to secure coins is based on elliptic curves defined by the secp256k1 domain parameters. This example demonstrates use of GenerateSharedSecret to encrypt a message for a recipient's public key, and subsequently decrypt the message using the recipient's private key.
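A hedged sketch of the GenerateSharedSecret flow the example refers to (the ECDH exchange only; hashing the secret before use is our choice here, and the cipher step is omitted):

	recipientPriv, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		log.Fatal(err)
	}
	senderPriv, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		log.Fatal(err)
	}

	// Each side combines its own private key with the other side's
	// public key and arrives at the same shared secret.
	s1 := secp256k1.GenerateSharedSecret(senderPriv, recipientPriv.PubKey())
	s2 := secp256k1.GenerateSharedSecret(recipientPriv, senderPriv.PubKey())

	// bytes.Equal(s1, s2) holds; hash the secret before using it as a
	// symmetric key (sha256 from crypto/sha256).
	key := sha256.Sum256(s1)
	_, _ = key, s2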
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/monetarium/monetarium-node/dcrjson). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation: To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and include full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, performed at run time, which means any mistakes won't be found until the code is actually executed. That said, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
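A hedged sketch of the preferred New<Foo>Cmd approach (the specific command constructor and the MarshalCmd parameters shown here are illustrative assumptions; consult the package for the exact names and signatures):

	// Static construction: parameter mistakes fail at compile time.
	cmd := types.NewGetBlockCountCmd() // hypothetical example command

	// Marshal into raw bytes ready to be sent across the wire.
	raw, err := dcrjson.MarshalCmd("1.0", 1, cmd)
	if err != nil {
		log.Fatal(err)
	}
	_ = raw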
Package paa reads and writes PAA (Arma/DayZ) texture files. High-level usage: File structure (simplified): Tags are optional in theory but always present in practice. Tag names are stored as four-byte identifiers (e.g. "CGVA", "CXAM", "GALF", "ZIWS", "SFFO") and map to avg/max/flags/swizzle/offsets. This package writes tags in the canonical order used by BI tools: CGVA, CXAM, GALF (optional), ZIWS (optional), SFFO. Mipmaps: Each mipmap block is: Some files include a trailing "dummy" mipmap (width=0, height=0), but decoding uses SFFO as the authoritative list of mip offsets. DXT payload size is the *encoded* size (BC1/BC3 blocks), not width*height*4:

For DXT1: ((w+3)/4)*((h+3)/4)*8
For DXT5: ((w+3)/4)*((h+3)/4)*16

LZO compression: Arma2+ allows per-mip LZO compression for DXT formats. The top bit of the mip width indicates LZO-compressed data. This package uses per-mip LZO only when it reduces size; otherwise the mip is stored raw. Always mask the width (width & 0x7FFF) for dimension calculations. SFFO (offset table): SFFO contains 16 uint32 offsets to mipmap blocks, relative to file start. Only as many entries as actual mip levels are filled; remaining entries are 0. The engine can derive offsets without SFFO, but BI tools always write it. Normal maps (_nohq): Arma stores tangent-space normal maps in DXT5 with a swizzle tag: Channels are interpreted as: The runtime reconstructs display channels as: This package can apply the swizzle on encode (e.g. NormalMapSwizzle or hint-driven swizzle) and will unswizzle on decode when the ZIWS tag matches. For some hints we emit ZIWS but intentionally keep the payload unswizzled to avoid double-swizzle in external tools. Important caveat: some external viewers ignore the ZIWS swizzle tag and display raw channels, which makes _nohq appear incorrect even when the data is valid. CGVA/CXAM tag order and encoding: BI tools write these tags in BGRA order (not RGBA). For _nohq, CXAM is always FF FF FF FF. These details affect TexView's PNG export. EncodeWithOptions writes CGVA/CXAM in BGRA and matches BI behavior. Mipmap defaults: EncodeWithOptions generates a full mip chain down to 4x4 (BI default) unless overridden. GUI textures often disable mips; use EncodeOptions.GenerateMipmaps=false or MaxMipCount=1. Format pitfalls and decisions: See EncodeOptions for knobs controlling quality, swizzle, LZO, and mipmaps. TexConvert.cfg support: The texconfig package provides a default config mirroring BI's TexConvert.cfg and filename-based hint resolution. Some legacy formats are intentionally rejected because TexView crashes on them (e.g. *_raw, *_draftlco, *_8888).
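The payload-size arithmetic above, written out as a small Go helper (a sketch; the function name is ours, not part of the package API). BC1/DXT1 blocks are 8 bytes and BC3/DXT5 blocks are 16 bytes per 4x4 texel block:

	// dxtPayloadSize returns the encoded byte size of one mip level.
	func dxtPayloadSize(w, h int, dxt5 bool) int {
		w &= 0x7FFF // strip the LZO flag bit before any dimension math
		blocks := ((w + 3) / 4) * ((h + 3) / 4)
		if dxt5 {
			return blocks * 16 // BC3
		}
		return blocks * 8 // BC1
	}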
Package gcs provides an API for building and using a Golomb-coded set filter. A Golomb-Coded Set (GCS) is a space-efficient probabilistic data structure that is used to test set membership with a tunable false positive rate while simultaneously preventing false negatives. In other words, items that are in the set will always match, but items that are not in the set will also sometimes match with the chosen false positive rate. This package currently implements two different versions for backwards compatibility. Version 1 is deprecated and therefore should no longer be used. Version 2 is the GCS variation that follows the specification details in DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#golomb-coded-sets. Version 2 sets do not permit empty items (data of zero length) to be added and are parameterized by the following:

* A parameter `B` that defines the remainder code bit size
* A parameter `M` that defines the false positive rate as `1/M`
* A key for the SipHash-2-4 function
* The items to include in the set

The errors returned by this package are of type gcs.Error. This allows the caller to programmatically determine the specific error by examining the ErrorKind field of the type asserted gcs.Error while still providing rich error messages with contextual information. See ErrorKind in the package documentation for a full list. GCS is used as a mechanism for storing, transmitting, and committing to per-block filters. Consensus-validating full nodes commit to a single filter for every block and serve the filter to SPV clients that match against the filter locally to determine if the block is potentially relevant. The required parameters for Decred are defined by the blockcf2 package. For more details, see the Block Filters section of DCP0005: https://github.com/decred/dcps/blob/master/dcp-0005/dcp-0005.mediawiki#block-filters
Package toml is a TOML parser and manipulation library. This version supports the specification as described in https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md Go-toml can marshal and unmarshal TOML documents from and to data structures. Go-toml can operate on a TOML document as a tree. Use one of the Load* functions to parse TOML data and obtain a Tree instance, then one of its methods to manipulate the tree. The package github.com/pelletier/go-toml/query implements a system similar to JSONPath to quickly retrieve elements of a TOML document using a single expression. See the package documentation for more information.
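A hedged sketch of tree-based access (Load is one of the Load* functions named above; Get and GetPath follow the package's documented accessors):

	tree, err := toml.Load(`
	[postgres]
	user = "pelletier"
	password = "mypassword"
	`)
	if err != nil {
		log.Fatal(err)
	}
	// Dotted-path access returns an interface{} to type-assert:
	user := tree.Get("postgres.user").(string)
	// Equivalent access with an explicit path slice:
	password := tree.GetPath([]string{"postgres", "password"}).(string)
	fmt.Println(user, password)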
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped; alternatively, use `Config` and `DSN` to construct a DSN string programmatically. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If your account is not on AWS in the us-west-2 region, then append the region after the account name, e.g. "<account>.<region>". If your account is not on an AWS deployment, then append not only the region, but also the platform, e.g., "<account>.<region>.<platform>". Account, region, and platform should be separated by periods ("."), as shown above. If you are using a global URL, then append the connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is not successful. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (the URL prefix for Okta). To authenticate using your IdP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth access token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: A token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.
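The account/user/database example above, spelled out as a DSN (a sketch; the Config and DSN helpers can build the same string programmatically):

	db, err := sql.Open("snowflake",
		"jsmith:mypassword@myaccount/mydb/testschema?warehouse=mywh")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()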
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session never expires. Use this option with care, as it keeps the session open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail closed. validateDefaultParameters: true by default. Set to false to disable existence and privilege checks for the Database, Schema, Warehouse, and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. `no_proxy=.amazonaws.com` means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is a NOP; no output is generated. This is intentional so that applications using the same set of logger parameters do not conflict with glog, which is incorporated into the driver's logging framework. In order to enable debug logging for the driver, add the build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with glog parameters. For example, the following command generates all activity logs on standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.Flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if the application uses the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap to the context used with execution methods that take a context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue since the database data type is provided along with the data, so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type.
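A hedged sketch of the binding parameter flag (sf aliases the gosnowflake package, as in the example mentioned below; sf.DataTypeTimestampNtz is one of the exported flag values and pins the Snowflake type for the following time.Time argument):

	// insertEvent binds t as TIMESTAMP_NTZ rather than letting the driver
	// guess among DATE, TIME, and the TIMESTAMP variants.
	func insertEvent(db *sql.DB, t time.Time) error {
		_, err := db.Exec(
			"INSERT INTO events(created_at) VALUES(?)",
			sf.DataTypeTimestampNtz, t,
		)
		return err
	}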
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This shifts workloads from the Snowflake database to the clients for scale. The download takes place asynchronously in goroutines named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing the result set. The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option reduces the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on the Travis Ubuntu box show a five times smaller memory footprint at four times slower performance. Be cautious when using this option. (Private Preview) JWT authentication ** Not recommended for production use until GA. JWT tokens are supported when compiling with Go 1.10 or higher. Binaries compiled with a lower version of Go return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct a DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string. One way to encode a byte slice in base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice in base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell: GET and PUT operations are unsupported.
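A hedged sketch of the Config-based JWT setup (the Authenticator/AuthTypeJwt and PrivateKey names are assumptions matching the authenticator=SNOWFLAKE_JWT DSN route; rsaKey stands for an *rsa.PrivateKey parsed from the PKCS8 file):

	cfg := &gosnowflake.Config{
		Account:       "myaccount",
		User:          "jsmith",
		Authenticator: gosnowflake.AuthTypeJwt,
		PrivateKey:    rsaKey, // *rsa.PrivateKey
	}
	dsn, err := gosnowflake.DSN(cfg)
	if err != nil {
		log.Fatal(err)
	}
	db, err := sql.Open("snowflake", dsn)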
Package goke provides a high-performance Entity Component System (ECS) engine designed for data-oriented programming and mechanical sympathy. The engine is built on an Archetype-based storage model using Structure of Arrays (SoA). By storing components of the same type in contiguous memory columns, the engine ensures that entities live "densely" in memory. This layout allows the CPU to leverage hardware prefetching and linear cache access, drastically reducing cache misses compared to traditional Object-Oriented patterns. Entities & Generation-based Recycling: Entities are 64-bit identifiers consisting of an Index (32-bit) and a Generation (32-bit). When an entity is removed, its index returns to a pool and its generation is incremented. This ensures that stale references to deleted entities (ABA problem) are easily detected, while allowing the internal storage to reuse memory slots for dense packing and high cache hit rates. Components as Data Columns: Components are user-defined structs registered within the ComponentsRegistry. The engine treats them as contiguous blocks of memory. By registering a component type, the engine gains metadata (Size and reflect.Type) used to build Archetype Columns. This registry-based approach enables zero-allocation component access and ensures that data of the same type is perfectly aligned for SIMD-like processing speeds. Systems & Execution Plans: Logic is decoupled into Systems. The engine supports both interface-based systems (System interface) and lightweight functional systems (SystemFunc). The order and concurrency of execution are defined via an ExecutionPlan. Thread Safety & Parallelism: The engine allows for synchronous or parallel system execution. While the engine provides the tools for high-performance concurrent processing (RunParallel), it follows a "Power to the Programmer" philosophy: it is the developer's responsibility to ensure that systems running in parallel operate on disjoint component sets to avoid data races. Deferred Commands: To maintain state consistency during system updates, modifications to the world (like adding components or removing entities) are buffered via the SystemCommandBuffer and applied during explicit synchronization points (Sync). Type-Safe Views & Cache-Optimized Queries: Data retrieval is handled through generated View structures. These views provide type safety without reflection overhead during the main loop. By accessing contiguous archetype columns directly, views leverage maximal hardware prefetching. Iteration results are returned through specialized Head and Tail structures, which are architected to maintain optimal throughput and minimize CPU stall cycles. To maintain extreme performance, the engine operates with certain fixed limits: Component Types: The engine supports up to 128 unique component types per registry. This is determined by the ArchetypeMask (2x64-bit fields), ensuring that archetype matching remains a fast, constant-time bitwise operation. Memory Pre-allocation: Archetypes and internal structures are initialized with predefined capacities (configurable via EngineOptions). This reduces early memory fragmentation and minimizes GC pressure during the initial entity burst. Entity Indexing: Entities are 64-bit identifiers, allowing for a virtually unlimited number of entities, constrained only by the available system RAM. View Complexity: Queries support up to 8 simultaneous component types.
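A generic sketch of generation-based recycling as described above (deliberately not goke's actual API): packing a 32-bit index and a 32-bit generation into one 64-bit handle makes stale-handle detection a single comparison.

	type Entity uint64

	func makeEntity(index, gen uint32) Entity {
		return Entity(uint64(gen)<<32 | uint64(index))
	}

	func (e Entity) Index() uint32      { return uint32(e) }
	func (e Entity) Generation() uint32 { return uint32(e >> 32) }

	// alive reports whether the handle still refers to a live entity.
	// generations[i] is incremented whenever slot i is recycled, so an
	// old handle to the same slot no longer matches (the ABA problem).
	func alive(e Entity, generations []uint32) bool {
		return generations[e.Index()] == e.Generation()
	}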
For more complex filtering, an unlimited number of additional types can be filtered using With/Without constraints (Tags). Prefetching Thresholds: To prevent CPU prefetching degradation, result structures (Head/Tail) are limited to a maximum of 4 pointer fields each. Adhering to this "Rule of 4" ensures the Go compiler and hardware prefetchers can maintain peak throughput during high-frequency iteration. Much of the high-arity query logic is generated to ensure type safety across different component counts. Files matching 'view_gen_*.go' should not be edited manually, as they are overwritten during the generation cycle.
Package errcode facilitates standardized API error codes. The goal is that clients can reliably understand errors by checking against immutable error codes. This godoc documents usage. For broader context, see https://github.com/pingcap/errcode/tree/master/README.md. Error codes are represented as strings by CodeStr (see the CodeStr documentation). This package is designed to have few opinions and be a starting point for how you want to do errors in your project. The main requirement is to satisfy the ErrorCode interface by attaching a Code to an Error. See the documentation of ErrorCode. Additional optional interfaces HasClientData, HasOperation, Causer, and StackTracer are provided for extensibility in creating structured error data representations. Hierarchies are supported: a Code can point to a parent. This is used in the HTTPCode implementation to inherit HTTP codes found with MetaDataFromAncestors. The hierarchy is present in the Code's string representation with a dot separation. A few generic top-level error codes are provided (see the variables section of the doc). You are encouraged to create your own error codes customized to your application rather than solely using generic errors. See NewJSONFormat for an opinion on how to send back metadata about errors with the error data to a client. JSONFormat includes a body of response data (the "data field") that is by default the data from the Error serialized to JSON. Stack traces are automatically added by NewInternalErr and show up as the Stack field in JSONFormat. Errors can be grouped with Combine() and ungrouped via Errors(), which show up as the Others field in JSONFormat. To extract any ErrorCodes from an error, use CodeChain(). This extracts error codes without information loss (using ChainContext).
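A hedged sketch of the main requirement, attaching a Code to an error so it satisfies ErrorCode (the NewCode constructor and the CodeStr conversion are assumptions based on the names above; consult the package for the actual constructors):

	// A code registered for this application; hierarchical names use dots.
	var notFoundCode = errcode.NewCode(errcode.CodeStr("myapp.not_found"))

	type notFoundErr struct{ what string }

	func (e notFoundErr) Error() string      { return e.what + " not found" }
	func (e notFoundErr) Code() errcode.Code { return notFoundCode }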
Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing chat completion functionality. The chat completion functionality generates AI responses to user messages. Basic usage: Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing various AI services such as chat completions, embeddings, and RAG (Retrieval-Augmented Generation) functionality. Basic usage: For chat completions: For embeddings: For RAG operations: Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing embedding functionality. The embedding functionality generates vector representations from text. Basic usage: Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing RAG (Retrieval-Augmented Generation) functionality. The RAG functionality enables document uploading, searching, and chatting. Basic usage: Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing audio transcription functionality. The transcription functionality converts audio files to text. Basic usage: Package aiengine provides a client library for interacting with the Sakura AI Engine API. This package includes clients and data structures for accessing text-to-speech (TTS) functionality. The TTS functionality converts text to audio using the VOICEVOX-compatible interface. Basic usage:
Package gopoet is a library to assist with generating Go code. It includes a model of the Go language that is simpler, and thus easier to work with, than those provided by the "go/ast" and "go/types" packages. It also provides adapter methods to allow simple interoperability with elements from the "go/types" and "reflect" packages. The Go Poet API and functionality are strongly influenced by a similar library for Java (for generating Java code) named Java Poet (https://github.com/square/javapoet). TypeName is the way Go Poet represents Go types. There is API in this package for constructing TypeName instances and for converting type representations from the "go/types" and "reflect" packages to TypeName values. It includes related types for representing function signatures, struct fields, and interface methods (Signature, FieldSpec, and MethodSpec respectively). GoFile is the root type in Go Poet for building a representation of Go language elements. The GoFile represents the file itself. The FileElement (and its various concrete implementations) represent top-level declarations in the file. And types like FieldSpec, InterfaceEmbed, and InterfaceMethod represent the elements that comprise struct and interface type definitions. Statements and expressions are not modeled by the Go Poet API, so function bodies and const and var initializers are represented with a type named CodeBlock. Usage of Go Poet involves constructing a GoFile, filling it with elements, and then using the various WriteGoFile* methods to then translate these models into Go source code. Import statements need not be defined manually. GoFile embeds a type named Imports which assists with managing import statements. It tracks all packages that are referenced, generating import aliases as necessary in the event of conflicts. After all referenced packages have been resolved, gopoet.Imports can then generate the import statements necessary. It also provides API for re-writing various references, to adjust their package qualifier so that references to elements or types in other packages are interpolated into Go source code with the correct qualifiers. The lowest level building blocks for the above API are representations of packages, symbols (references to named package-level elements, like consts, vars, types, and funcs), and method references (like a func symbol, but also includes a type qualifier, not just a package qualifier). Various parts of the API provide methods for accessing/converting to these types. Under the hood, it is packages and symbols that are re-written by an Imports instance to ensure all referenced elements are rendered with the package qualifier (e.g. the package name or associated import alias). Go Poet does not attempt to model Go statements and expressions or provide any way to create structured representations of function and method bodies. This is very similar to Java Poet *except* that Go Poet does not provide a custom mechanism for printing and formatting code. It instead relies on the existing facilities in the "fmt" and "text/template" packages. This package provides several types for modeling elements of the Go language that can then be referenced in code blocks (via "%s" or "%v" format specifiers or as elements of a data value rendered by a template). The CodeBlock type and related methods include API that resembles the various Print* functions in the "fmt" package.
Before these are rendered to source code, references to Go elements and types are translated to account for the import statements (and any associated aliases) for the file context into which they are being rendered. Format arguments can also include instances of reflect.Type or even items from the "go/types" package: types.Type, types.Object, and *types.Package. These types of values will result in proper references to these elements when the code is actually rendered. Similarly, templates can be rendered, and the data value supplied to the template will be reconstructed, with any elements therein being first translated to have the right package qualifiers. As described above, code blocks (which represent function and method bodies and initializer expressions) can be rendered from templates and provided data values that the template renders. It is also possible to completely eschew modeling generated code with various elements and to generate a file completely from a template. In this case, you can still get value from Go Poet by using a *gopoet.Imports type to track imported packages and assign aliases, and then render the resulting []gopoet.ImportSpec from your template. Furthermore, the value that the template renders can contain instances of gopoet.TypeName, gopoet.Package, and gopoet.Symbol, just like when rendering code blocks for function bodies. Calling imports.QualifyTemplateData(data) will re-write the values in the data value so they are properly qualified per the imported packages. Do this before rendering the template. One limitation of re-writing template data is that it cannot change the *types* of elements except in limited circumstances. For example, a Type from the "go/types" package cannot be converted to a gopoet.TypeName if the reference is a struct field whose type is types.Type (since gopoet.TypeName does not implement types.Type). Because of this, not all referenced types and elements can be re-written, and so may not be rendered correctly. For this reason, it is recommended to use gopoet.TypeName as the means of referring to types in a template data value, not types.Type or reflect.Type.
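A hedged, minimal sketch of the workflow described above: build a GoFile, fill it with elements, and write it out (the NewGoFile constructor and its argument order are assumptions; WriteGoFile* is the method family named in the text):

	// Build the file model.
	f := gopoet.NewGoFile("greetings.go", "example.com/greetings", "greetings")
	// ...fill it with FileElement values (consts, vars, types, funcs)...
	// Render it; imports are resolved, aliased, and emitted automatically.
	if err := gopoet.WriteGoFile(os.Stdout, f); err != nil {
		log.Fatal(err)
	}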
Package lru provides generic type and concurrent safe LRU data structures with near O(1) perf and optional time-based expiration support. This example demonstrates creating a new map instance, inserting items into the map, existence checking, looking up an item, causing an eviction of the least recently used item, and removing an item. This example demonstrates creating a new set instance, inserting items into the set, checking set containment, causing an eviction of the least recently used item, and removing an item.
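Since the examples were stripped, here is a hypothetical sketch of the map flow they describe (the constructor and method names are assumptions, not the package's confirmed API):

	m := lru.NewMap[string, int](2) // hold at most 2 items

	m.Put("a", 1)
	m.Put("b", 2)
	if v, ok := m.Get("a"); ok { // lookup marks "a" most recently used
		_ = v
	}
	m.Put("c", 3) // evicts "b", the least recently used item
	m.Delete("a") // explicit removal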
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decred. The errors returned by this package are of type txscript.ErrorKind wrapped by txscript.Error which has full support for the standard library errors.Is and errors.As functions. This allows the caller to programmatically determine the specific error while still providing rich error messages with contextual information. See the constants defined with ErrorKind in the package documentation for a full list.
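A sketch of the errors.Is/errors.As pattern that paragraph describes (ErrEvalFalse stands in for any ErrorKind constant, and the Error field names are assumed; see the package's ErrorKind list):

	// Detect one specific kind of script failure.
	if errors.Is(err, txscript.ErrEvalFalse) {
		// the script evaluated to false
	}

	// Or recover the wrapping error for its contextual description.
	var serr txscript.Error
	if errors.As(err, &serr) {
		fmt.Println(serr.Description)
	}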
Package treego implements a generic B-tree data structure in Go. A B-tree is a self-balancing tree data structure that maintains sorted data and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree generalizes the binary search tree, allowing for nodes with more than two children. This implementation provides: Example usage: The B-tree is particularly useful for: Performance characteristics: The minimum degree parameter affects performance:
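The stripped example cannot be reconstructed from context, but the search path through a B-tree is easy to sketch generically (illustrative Go using the standard library cmp and slices packages, not treego's API); for minimum degree t, every non-root node holds between t-1 and 2t-1 keys:

	type node[T cmp.Ordered] struct {
		keys     []T
		children []*node[T] // empty for leaf nodes
	}

	// search descends through one child per level, so its cost is
	// logarithmic in the total number of keys.
	func search[T cmp.Ordered](n *node[T], key T) bool {
		for n != nil {
			i, found := slices.BinarySearch(n.keys, key)
			if found {
				return true
			}
			if len(n.children) == 0 {
				return false
			}
			n = n.children[i]
		}
		return false
	}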
Package weightedrand contains a performant data structure and algorithm used to randomly select an element from some kind of list, where the chance of each element being selected is not equal, but rather defined by relative "weights" (or probabilities). This is called weighted random selection. This package creates a presorted cache optimized for binary search, allowing repeated selections from the same set to be significantly faster, especially for large data sets. In this example, we create a Chooser to pick from amongst various emoji fruit runes. We assign a numeric weight to each choice. These weights are relative, not on any absolute scoring system. In this trivial case, we will assign a weight of 0 to all but one fruit, so that the output will be predictable.
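A hedged reconstruction of that fruit example (NewChooser and NewChoice follow the generics API of github.com/mroth/weightedrand/v2; signatures assumed):

	chooser, err := weightedrand.NewChooser(
		weightedrand.NewChoice('🍋', 0),
		weightedrand.NewChoice('🍉', 0),
		weightedrand.NewChoice('🍒', 1), // the only non-zero weight
	)
	if err != nil {
		log.Fatal(err)
	}
	// With every other weight at 0, Pick always returns the cherry.
	fmt.Println(string(chooser.Pick()))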