Go libraries

Have you ever lost a ton of time finding a Go library for your needs? In theory, you can check lists like Awesome Go or make a choice based on GitHub stars. But Awesome Go contains over 2600 libraries, and popularity is not always the best indicator of library quality. I often thought that it would be great to have a place where I could find just the best, battle-tested libraries to use in my projects. Because Miłosz and I couldn’t find such a place, we decided to create it.


Based on our experience leading multiple Go teams and working on various projects, including complex financial, health, and security systems, we recommend tools that should work well across different projects.

In addition to providing a list of libraries, we also want to show you some non-obvious uses for those tools and libraries. However, it’s important to note that most of these tools can be misused. We’ve included some common anti-patterns to help you avoid making those mistakes.

This list is intended to be opinionated. We only wanted to include libraries we have used in real production systems. Thanks to that, we recommend only libraries that we are 100% sure about. Unfortunately, our day is limited to 24 hours, so checking all available libraries is impossible.

If you know of any libraries we should include on this list, please let us know in the comments! We will continue to update the list with new findings over time.

HTTP

Routers

As I mentioned in my previous article, it’s generally better to use libraries instead of frameworks for long-term projects. One of the most fundamental components of any service is an HTTP router. While it’s technically possible to build an application without one by using the standard library’s http package, its routing capabilities are limited. Using a dedicated router will make your life much easier.

Anti-pattern: Frameworks in Go

If the library you consider using impacts how you write your domain models, it’s probably a framework, not a router. We recommend using lightweight routers instead. Learn more about the risks of using frameworks in my previous article.

By design, router functionality is limited to routing requests to a proper handler. All non-standard functionalities like CORS, CSRF, error handling, HTTP logging, and authorization (that frameworks usually provide) are provided by reusable middlewares. I recommend some in the section on middlewares.

I use one of two router libraries in most projects: Echo or chi. Both of them are great routers, with different characteristics. They work perfectly with OpenAPI code generation.

✅ Echo

GitHub Docs Examples

Compared to chi, Echo offers a custom handler signature that isn’t compatible with the standard library’s http.HandlerFunc. Some people may find it a downside, but I think it helps to write less error-prone HTTP handlers.

If you have been writing Go for a while, you probably made this mistake at least once:

func someHandler(w http.ResponseWriter, r *http.Request) {
	err := foo()
	if err != nil {
		w.WriteHeader(http.StatusBadRequest) 
		// you forgot the return here, bar() will be executed 
	}
 
	bar()
}

Echo makes you return an error:

func someHandler(c echo.Context) error {
	err := foo()
	if err != nil {
		return err
	}
	
	bar()
	
	return c.NoContent(http.StatusNoContent)
}

The advantage of Echo is the ability to define a custom error handler. It’s not possible to do it in the same way with chi.

For detailed usage and examples, please check Echo docs.

✅ Chi

GitHub Docs Examples

Compared to Echo, chi’s handler functions are compatible with the standard library. For some people, it may be an upside; for some, it may be a downside – you should make your own judgment.

What chi does better than Echo is the format of defining routes and grouping. It gives you better control over middleware per path or sub-path.

r.Route("/articles", func(r chi.Router) {
	r.With(paginate).Get("/", ListArticles)
	r.Post("/", CreateArticle)       // POST /articles
	r.Get("/search", SearchArticles) // GET /articles/search
	
	r.Route("/{articleID}", func(r chi.Router) {
		r.Use(ArticleCtx)            // Load the *Article on the request context 
		r.Get("/", GetArticle)       // GET /articles/123 
		r.Put("/", UpdateArticle)    // PUT /articles/123 
		r.Delete("/", DeleteArticle) // DELETE /articles/123
	})
	
	// GET /articles/whats-up
	r.With(ArticleCtx).Get("/{articleSlug:[a-z-]+}", GetArticle)
})

Full source: github.com/go-chi/chi/_examples/rest/main.go

Anti-pattern: You should not choose tools based just on benchmarks

Some developers choose libraries based on benchmark results. It’s a risky approach because extreme performance optimizations lead to a worse API and a limited feature set. In most cases, performance differences are negligible in real-life use cases.

Even if it makes a difference for some applications, for most it doesn’t matter that much. Making one extra database query or up-scaling a service can make a much more significant difference in performance.

If performance is not absolutely critical for you, you should prefer other characteristics, like the ease of use and number of features.

Middlewares

HTTP middlewares can give you functionalities like CORS, CSRF, error handling, HTTP logging, authorization, etc.

Echo and chi each provide their own set of middlewares.

Echo middlewares have a different interface, so they can’t be used with chi. Generally speaking, though, all standard-library-compatible middlewares work with both chi and Echo.

To use standard-library-compatible middleware with Echo, you need to call echo.WrapMiddleware:

package main
 
import (
	"github.com/go-chi/chi/v5/middleware"
	"github.com/labstack/echo/v4"
)
 
// echo version
func main() {
	e := echo.New() 
	
	// You can use a middleware from chi with echo. 
	e.Use(
		echo.WrapMiddleware(middleware.BasicAuth("realm", map[string]string{
			"admin": "password",
		})), 
	)
	
	e.Logger.Fatal(e.Start(":8080"))
}

If none of them provides the middleware you are looking for, you can check the Awesome Go list. Standard-library-compatible middlewares will work with chi, Echo, and servers built with just the standard library. You can also write your own middleware. Check the example middlewares for inspiration!

Serving static content

You don’t need any library to serve static content in Go. Since Go 1.16, you can easily embed static files into your Go binary.

Here’s how to do it for Echo and chi:

package main
 
import (
	"embed"
	"log"
	"net/http"
	
	"github.com/go-chi/chi/v5"
	"github.com/labstack/echo/v4"
)
 
// your static files should be in the static/ directory, for example static/index.html, static/main.js etc.
//
//go:embed static
var staticFs embed.FS
 
// chi version
func main() {
	r := chi.NewRouter()
	
	r.Handle("/static/*", http.StripPrefix("/", http.FileServer(http.FS(staticFs))))
	log.Fatal(http.ListenAndServe(":8080", r))
}
 
// echo version
func main() {
	e := echo.New()
	e.GET("/static/*", echo.WrapHandler(http.StripPrefix("/", http.FileServer(http.FS(staticFs)))))
	e.Logger.Fatal(e.Start(":8080"))
}

After running the server, assets will be available under http://localhost:8080/static/index.html, http://localhost:8080/static/main.js etc.

Anti-pattern: Do not use no-name libraries for trivial functionalities

Do you remember the left-pad JavaScript library? It was 11 lines of code adding padding to the left side of a string.

One day, the author decided to remove that library. It wouldn’t have been a big problem if it hadn’t been a dependency of thousands of projects, including Node and Babel.

Serving static content from your web server is one such trivial functionality.

OpenAPI

Nobody likes to maintain API contracts manually. It’s annoying and counterproductive to keep multiple boring JSON files up to date. OpenAPI solves this problem with a JavaScript HTTP client and a Go HTTP server generated from the provided specification.

This is what an example specification looks like. If you haven’t worked with OpenAPI before, you can read more details in my previous article. Here, I focus on the tools that we recommend for generating code from an OpenAPI spec.

Generating Go server and clients

We do not recommend using the official OpenAPI generator for Go code. We recommend the oapi-codegen tool instead because of the higher quality of the generated code. It also has more functionality.

Anti-pattern: Don't try to generate OpenAPI spec from Go code

There are tools that can generate an OpenAPI spec from Go code. We don’t recommend using them.

The entire OpenAPI specification is very rich, and it will be hard to generate everything from Go code. It’s likely that you will need to add something to the OpenAPI spec at some point, and it may be impossible to do it from the Go code.

It’s much easier to generate it the other way around: Go code from OpenAPI spec.

✅ deepmap/oapi-codegen

GitHub Docs Example

oapi-codegen is a great tool that doesn’t just generate models but also the entire router definition, header validation, and proper parameter parsing. It works with chi and Echo.

To generate a server, run the following:

oapi-codegen -generate types -o "<OUTPUT DIR>/openapi_types.gen.go" -package "<GO PACKAGE>" "api/openapi/service.yml"
oapi-codegen -generate <TYPE> -o "<OUTPUT DIR>/openapi_api.gen.go" -package "<GO PACKAGE>" "api/openapi/service.yml"

Where <TYPE> for chi should be chi-server, and for Echo just server.

To generate clients:

oapi-codegen -generate types -o "<OUTPUT DIR>/$service/openapi_types.gen.go" -package "<GO PACKAGE>" "api/openapi/service.yml"
oapi-codegen -generate client -o "<OUTPUT DIR>/$service/openapi_client_gen.go" -package "<GO PACKAGE>" "api/openapi/service.yml"

Don’t forget to change <GO PACKAGE> to the desired Go package name and <OUTPUT DIR> to the desired output dir. 😉

Your job on the server side is just to implement the ServerInterface interface, like:

// ServerInterface represents all server handlers.
type ServerInterface interface {
	
	// (GET /trainer/calendar)
	GetTrainerAvailableHours(w http.ResponseWriter, r *http.Request, params GetTrainerAvailableHoursParams)
 
	// (PUT /trainer/calendar/make-hour-available)
	MakeHourAvailable(w http.ResponseWriter, r *http.Request)
 
	// (PUT /trainer/calendar/make-hour-unavailable)
	MakeHourUnavailable(w http.ResponseWriter, r *http.Request)
}

Full source: github.com/ThreeDotsLabs/wild-workouts-go-ddd-example/internal/trainer/ports/openapi_api.gen.go

You can see it in action in the Wild Workouts project.

Bonus: Client for JavaScript/TypeScript

Even though this is a list of recommended Go libraries, you may also need to generate code for the browser.

✅ openapi-generator-cli

GitHub Docs

In that case, we also recommend a non-official generator instead of the official one.

In contrast to oapi-codegen, openapi-generator-cli is a Java tool. To avoid any JVM-related issues, we recommend generating clients using Docker:

docker run --rm --env "JAVA_OPTS=-Dlog.level=error" -v "${PWD}:/local" \
  "openapitools/openapi-generator-cli:v6.2.1" generate \
  -i "/local/api/openapi/service.yml" \
  -g javascript \
  -o "/local/web/src/clients/service"

It assumes that the spec is available locally in api/openapi/service.yml.

You can use openapi-generator-cli for TypeScript and other languages as well.

Alternative types of communication

gRPC

gRPC is a technology that can help you with building robust, internal communication between your services (but not only!).

I already described in detail why it’s worth using gRPC for internal communication and how to do it.

I’ll not repeat it here and will focus on the tooling you need.

With gRPC, you have little choice for generating server and client: you should use official tooling. The good news is that you don’t need anything more because it does its job!

✅ protoc

Docs

To generate Go code from .proto files, you need to install protoc and protoc Go Plugin.

A list of supported types can be found in Protocol Buffers Version 3 Language Specification. More complex built-in types like Timestamp can be found in Well-Known Types list.
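For illustration, installing the plugins and generating code might look like this (the .proto path is an example; adjust it to your project layout):

```shell
# Install the protoc Go plugins (protoc itself comes from your package
# manager or from the protobuf releases page).
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# Generate Go types and the gRPC server/client from a .proto file.
protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  api/protobuf/service.proto
```

The generated files land next to the .proto file thanks to `paths=source_relative`; many teams prefer to commit them so consumers don’t need protoc installed.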

Messaging

✅ Watermill

GitHub Docs Examples

About four years ago, when working on one of our projects, we found that there was no library that made building message-driven or event-driven applications easy. To make our lives easier, we decided to write a library that would allow us to write event-driven code as easily as HTTP services. This is how Watermill was born.

Today, Watermill is one of the most popular Go libraries, with almost 5k GitHub stars, over 35 contributors, and 10 officially supported Pub/Subs.

Usually, message-broker libraries are very low-level. With Watermill, publishing messages may be as simple as:

publisher.Publish("example.topic", msg)

And subscribing like:

messages, err := subscriber.Subscribe(ctx, "example.topic")
if err != nil {
	panic(err)
}
 
for msg := range messages {
	fmt.Printf("received message: %s, payload: %s\n", msg.UUID, string(msg.Payload))
	msg.Ack()
}

Compared to using the message broker’s library directly, Watermill provides higher-level functionality like middlewares, CQRS support, or a forwarder component (which can be used to stream your messages from an SQL database to the message broker).

Today, Watermill officially supports Kafka, GCP Pub/Sub, NATS, and RabbitMQ message brokers (Pub/Subs). It can also listen to and emit events as HTTP hooks, work with databases like MySQL/Postgres, BoltDB, or Firestore, and use an in-memory Go-channel-based Pub/Sub.

Database

SQL

There is no golden hammer solution for interacting with SQL databases. The reason is simple: it depends greatly on what kind of data you store.

In some projects, data models are relatively simple. In some, they are very complex. Because of that, I have two libraries to recommend. You should choose one of them based on the requirements of your project.

For projects with straightforward data models, you should check sqlx. For a bit more complex ones, you should look at SQLBoiler.

Tactic: Using ORM

I hear more and more often that using an ORM is not a good idea. I understand the reason for such thinking: many people have been hurt by the improper use of ORMs.

It’s like with a knife: I have an extremely sharp Japanese knife without which I can’t imagine cooking. On the other hand, I need to be very careful when using it. But that fact doesn’t make the knife a bad tool! If you use it properly, it makes your life much easier. It’s the same with ORMs. Writing queries by hand may be time-consuming and error-prone when your models are complex. ORMs were invented to solve that problem.

If you have a bad experience with ORMs, you should check the Things to know about DRY article. The tactics presented there will help you avoid the most common problems with ORMs.

Anti-pattern: Avoid weakly typed ORMs

Most ORMs depend heavily on reflection and interface{}/any. The type system is one of the biggest strengths of Go: it helps you build applications efficiently. Giving up strict typing makes your application more error-prone.

✅ sqlx

GitHub Docs

The standard library’s database/sql package is rather low-level. sqlx provides a more convenient and powerful API for working with databases. It includes helper functions for common tasks like inserting and querying data, and support for more advanced features like prepared statements and transactions. sqlx also has more advanced support for data unmarshaling (for example, to structs, lists of structs, JSON data, etc.). As a nice bonus, sqlx’s interface is compatible with the interfaces from database/sql.

But even though sqlx is a great library, it works well only for relatively simple database models. At some level of complexity, you should consider migrating to an ORM.

✅ SQLBoiler

GitHub Docs Examples

So far, the only ORM that fully meets our requirements is SQLBoiler. At first, the way you define SQLBoiler models may surprise you. Most ORMs generate the database schema out of your Go models. SQLBoiler does the opposite: it generates Go models from your database schema.

This approach has multiple advantages. One of the most important features is stricter typing than other libraries. Thanks to that, many checks are done during compilation. You don’t need to depend on a ton of reflection and magic struct tags. In most cases, as long as the code compiles, it will work correctly.

Generating code from the database schema helps with migration from an existing database because you don’t need to re-write DB models: SQLBoiler generates them for you. So if you start with sqlx and move to SQLBoiler later, the migration should be pretty easy.

SQLBoiler supports PostgreSQL, MySQL, MSSQLServer 2012+, SQLite3, and CockroachDB.

Anti-pattern: Using database models in the API responses

As long as you’re not writing a stupid simple CRUD application (and the chances are you’re not), you should not couple your database models with the API responses.

At some point, requirements will force you to return data in a format different from the format you have in the database. Instead of trying to follow DRY at all costs, it’s time to split your models.

You can read more on this in “Business Applications in Go: Things to know about DRY” article and “Common Anti-Patterns in Go Web Applications”.

Migrations

SQLBoiler and sqlx don’t provide out-of-the-box support for migrations. It’s okay because you are not forced to use any particular solution.

In my recent projects, I used both sql-migrate and goose, and I was happy about them.

✅ sql-migrate

GitHub Docs

✅ goose

GitHub Docs

We like sql-migrate and goose because of their simplicity and flexibility. sql-migrate and goose can be executed as CLI tools and as part of your service.

I like to embed migrations into the binary of the service. Thanks to that, migrations are executed when the service starts, which keeps the setup simple and much less complex to run. For example, sql-migrate with go:embed:

// migrations/run.go
package migrations
 
import (
	"database/sql"
	"embed"
	
	_ "github.com/lib/pq" // registers the "postgres" driver; use another driver if you prefer
	migrate "github.com/rubenv/sql-migrate"
)
 
//go:embed *
var migrationsFiles embed.FS
 
func Run(postgresConn string) error {
	db, err := sql.Open("postgres", postgresConn)
	if err != nil {
		return err
	}
	
	migrations := &migrate.EmbedFileSystemMigrationSource{
		FileSystem: migrationsFiles, 
		Root:       ".",
	}
	
	if _, err := migrate.Exec(db, "postgres", migrations, migrate.Up); err != nil {
		return err
	}
	
	return nil
}

Put your migrations in migrations/, for example: migrations/1_init.sql, migrations/2_alter_some_table.sql, etc. Then run Run in your main.

Observability

Logging

The standard library’s logger doesn’t provide essential features like log levels and output formatting.

For logging, we can recommend two libraries: Logrus and zap. In contrast to zap, Logrus provides a bit nicer API, but zap is faster.

You can check detailed benchmarks in zap’s readme.

Anti-pattern: You should not choose tools based just on benchmarks

Some developers tend to choose libraries based on benchmark results. It’s a risky approach because extreme performance optimizations lead to a worse API and a limited feature set. In most cases, performance differences are negligible in real-life use cases.

Even if it makes a difference for some applications, for most it doesn’t matter that much. Making one extra database query or up-scaling a service can make a much more significant difference in performance.

If performance is not absolutely critical for you, you should prefer other characteristics, like the ease of use and number of features.

✅ Logrus

GitHub Docs

✅ zap

GitHub Docs

Metrics and tracing

✅ opencensus-go

GitHub Docs

OpenCensus Go is a library that helps you add metrics and tracing to your endpoints or database queries. The integration uses middleware/decorator patterns, and it doesn’t require a lot of custom code. It supports HTTP endpoints, gRPC endpoints, SQL databases, MongoDB, etc.

You can export traces and metrics to Prometheus, OpenZipkin, GCP Stackdriver Monitoring, Jaeger, AWS X-Ray, Datadog, Graphite, Honeycomb, or New Relic.

Configuration

Go’s standard library doesn’t offer many more configuration options than the flag package. Even if that’s enough for simple CLI tools, you may need a bit more when building services.

Env variables

✅ caarlos0/env

GitHub Docs

This library should provide everything you need to configure most applications. Compared to the standard library, it supports loading env variables into structs and setting defaults. That saves a lot of boilerplate in bigger configurations. It also supports embedded structs, so you can compose a bigger configuration from independent components.

Tactic: Use env variables for your services configuration

For most applications, environment variables should be good enough as configuration.

Configuration is where you should keep secrets and things that differ between environments. If your configuration is massive and does not change often, it may be worth hardcoding it instead. It’s much more pragmatic than having tens of never-changing configuration options.

Multi-format configuration

✅ koanf

GitHub Docs

Koanf is an excellent tool if your project requires multiple configuration formats. It’s often the case when you write tools that are used externally (for example, CLI tools).

This is my most recent finding. Compared to other, more popular libraries, koanf simply does multi-format configuration loading right. Bonus points for a nice abstraction that allows extending parsing and loading.

koanf supports the most important configuration formats, like JSON, YAML, dotenv, env vars, and HCL. Configuration can be loaded from the filesystem, flags, and multiple external sources like S3, Vault, etcd, or Consul.

Building CLI

Building CLI libraries

✅ urfave/cli

GitHub Docs Examples

We like urfave/cli because of its simple interface and extensibility. We used it in multiple projects without any issues.

Compared to other alternatives, it offers a big-enough feature set while keeping the library lightweight.

Testing

Assertions

✅ testify

GitHub Docs

testify became the standard assertion library, and I’ve seen it in every project I worked on. It provides asserts for the most common cases and also some more complex ones. One of testify’s key features is friendly messages for all failed asserts. They make writing and debugging tests much faster.

The library provides two ways of asserting:

  • assert from github.com/stretchr/testify/assert - the test continues after failure. You should use it when you want to see multiple errors (not just the first one). Works when called in a goroutine.
  • require from github.com/stretchr/testify/require - the test is interrupted after failure. You should use it when some critical condition was not met and continuing doesn’t make any sense (for example: storing to database failed). Doesn’t work when called in a goroutine.

Some example asserts:

Tactic: Use assert messages just if it is really needed

I’ve seen people who obsessively write messages for every assert.

For example:

assert.Equal(t, 123, 321, "123 is not equal to 321")

will give output:

Error:         Not equal: 
                expected: 123
                actual  : 321
Test:          TestEqual
Messages:      123 is not equal to 321

As you can see, the message doesn’t add anything beyond what testify would figure out. It can even be harmful because, over time, you will need to spend extra effort keeping the message up to date.

In most cases, the message provided by testify will be good enough. If the test fails, the person who sees the failure will navigate to this test and will understand the reason from the surrounding code.

Tactic: Do not write basic asserts by hand

Many people advocate for writing all asserts by hand. It won’t give you much advantage in the end.

testify is also very smart in showing the difference between the expected and actual value.

For example:

assert.Equal(t, []byte("foo bar baz"), []byte("foo bar 42"))

prints:

           Error:         Not equal: 
                          expected: []byte{0x66, 0x6f, 0x6f, 0x20, 0x62, 0x61, 0x72, 0x20, 0x62, 0x61, 0x7a}
                          actual  : []byte{0x66, 0x6f, 0x6f, 0x20, 0x62, 0x61, 0x72, 0x20, 0x34, 0x32}
                          
                          Diff:
                          --- Expected
                          +++ Actual
                          @@ -1,3 +1,3 @@
                          -([]uint8) (len=11) {
                          - 00000000  66 6f 6f 20 62 61 72 20  62 61 7a                 |foo bar baz|
                          +([]uint8) (len=10) {
                          + 00000000  66 6f 6f 20 62 61 72 20  34 32                    |foo bar 42|
                           }
           Test:          TestEqual
--- FAIL: TestEqual (0.00s)
 
 
Expected :[]byte{0x66, 0x6f, 0x6f, 0x20, 0x62, 0x61, 0x72, 0x20, 0x62, 0x61, 0x7a}
Actual   :[]byte{0x66, 0x6f, 0x6f, 0x20, 0x62, 0x61, 0x72, 0x20, 0x34, 0x32}

It makes no sense to reinvent the wheel and write it from scratch.

Anti-pattern: Do not use test suites from testify

Testify is an excellent library for assertions, but we don’t recommend its test suites. They don’t support parallel sub-tests. They may be fine for unit tests, but for integration/API/E2E tests it’s a deal-breaker.

The standard library can achieve most of the functionalities provided by testify’s test suites. You can see specific examples in this article on testing microservices.

✅ go-cmp

GitHub Docs Examples 1 Examples 2

Sometimes, you must assert a complex struct in your tests while skipping some fields. Or the struct contains fields that should be compared in a specific way. Or you need to ignore the slice order or a time delta. That’s where go-cmp can help you!

import (
	"time"
	
	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)
 
diff := cmp.Diff(
	want, 
	got, 
	// FieldToIgnore and AnotherFieldToIgnore will be ignored in SomeStruct
	cmpopts.IgnoreFields(SomeStruct{}, "FieldToIgnore", "AnotherFieldToIgnore"),
	
	// when comparing time.Time values, truncate them to one second;
	// a Comparer like this can be written for any type
	cmp.Comparer(func(x, y time.Time) bool {
		return x.Truncate(time.Second).Equal(y.Truncate(time.Second))
	}),
	
	// sort all []int before comparing, to ignore slice order
	cmpopts.SortSlices(func(x, y int) bool { 
		return x < y 
	}),
)
 
// cmp.Diff returns a non-empty diff if the two objects are different;
// to check that they are equal, assert that the diff is empty
assert.Empty(t, diff)

To see the list of all available options, I recommend checking the godoc of cmp and cmpopts package.

go-cmp can also be used outside of tests, but be careful – it’s another tool that, used irresponsibly, may hurt your project.

✅ gofakeit

GitHub Docs

If you need more realistic data for your tests, gofakeit can help.

Mocking

Writing mocks by hand

Initially, I recommended one popular mocking tool here. But after some thinking, we decided that the tool is not good enough to recommend. Instead, consider an alternative mocking strategy 👇

Tactic: Consider writing mocks by hand

Even if it sounds like a waste of time, writing mocks yourself may be good enough. Objectively speaking, writing them by hand doesn’t require much more code or time. As a bonus, it gives you much more flexibility.

This is what an example mock can look like:

type BalanceUpdate struct {
	UserID       string
	AmountChange int
}
 
type UserServiceMock struct {
	BalanceUpdates []BalanceUpdate
	balanceUpdatesLock sync.Mutex
}
 
func (u *UserServiceMock) UpdateTrainingBalance(ctx context.Context, userID string, amountChange int) error {
	u.balanceUpdatesLock.Lock()
	defer u.balanceUpdatesLock.Unlock()
	
	u.BalanceUpdates = append(u.BalanceUpdates, BalanceUpdate{userID, amountChange})
	return nil
}

It took me literally 1 minute to write it.

Tactic: Keep your interfaces small, so it's easier to mock them

It’s hard to mock complex types by hand. But if your interface is so complex that you can’t write a mock for it, you should reconsider whether it needs to be that big. Using mocking libraries only obfuscates the real problem.

Try to simplify the type that you are mocking. Maybe the interface segregation principle will help? It could be possible to split this type into multiple smaller types.

It will not only simplify your mocks but also improve your codebase.

Misc

Extra types support

✅ google/uuid

GitHub Docs

This library generates UUIDs.

✅ oklog/ulid

GitHub Docs

UUIDs may be slow to store at a larger scale in relational databases. A solution may be using Universally Unique Lexicographically Sortable Identifiers: ULIDs. ULIDs are compatible with UUIDs, are unique enough for large scale, and have a shorter string representation (Crockford’s base32). ULIDs are lexicographically sortable, thanks to which building indexes should be much faster.

It’s worth mentioning that UUID v6, v7, and v8 will also be lexicographically sortable. However, their spec was still a draft at the time of releasing this article. If you want to try UUID v6 or v7, you can check github.com/gofrs/uuid, which already implements them.

✅ shopspring/decimal

GitHub Docs

Go doesn’t have built-in support for decimals. shopspring/decimal does the job. We have used this library for a couple of years to build a large financial system.

Tactic: Use decimals for monetary values

Floats are not designed to accurately store decimal numbers.

For example:

fmt.Printf("%.16f", 12.1+0.03)
 
> Output: 12.1300000000000008

To make sure your money calculations are correct (and you are not losing or getting extra cents in calculations), we recommend using a decimal type.

It’s also a good idea to use the string representation of decimals instead of floats in transport (in events, API requests and responses, etc.).

Errors

✅ hashicorp/go-multierror

GitHub Docs

Have you ever needed to handle an error while you were already handling another error? hashicorp/go-multierror is here to help you!

It’s also helpful if an operation can return multiple errors, and you don’t want to return just the first one (for example, validation).

Example use cases:

func validate() error {
	var resultErr error
	
	if err := validateFoo(); err != nil {
		resultErr = multierror.Append(resultErr, err)
	}
	if err := validateBar(); err != nil {
		resultErr = multierror.Append(resultErr, err)
	}
	
	return resultErr
}  

or

func ExecuteStuff() error {
	if err := makeStuff(); err != nil {
		if cleanupErr := cleanup(); cleanupErr != nil {
			err = multierror.Append(err, cleanupErr)
		}
		
		return err
	}
	
	return nil
}

Note: Go 1.20 will introduce the errors.Join function. After the release of Go 1.20, you should consider using it instead.

Useful tools

Misc

✅ samber/lo

GitHub Docs

A Lodash-style Go library based on Go 1.18+ generics. It may be especially useful if you are coming to Go from Python and miss some basic slice/map functions.

The functions that I’m using the most are lo.Map and lo.Filter.

Even if some may find it “non-idiomatic”, I find it useful in some cases. It’s similar to using an ORM – as long as such libraries are used responsibly and don’t obfuscate code, they are useful.

So if you find yourself writing code like:

lo.Map(
	lo.Filter(someSlice, func(v SomeType, _ int) bool {
		return v.IsSpecial
	}), 
	func(t SomeType, _ int) string {
		return t.SpecialName()
	},
)

…it’s just better to convert it to a simple, more readable loop. 😉

✅ Task

GitHub Docs

Task is not really a Go library, but it’s a tool written in Go that may be useful for your projects.

It’s an excellent alternative to Makefile. Among its most important features are a simple YAML-based syntax and built-in support for skipping tasks whose sources haven’t changed.

It’s a must-have for each of my new projects.

Live code reloading

✅ reflex

GitHub Docs Example

Go doesn’t provide code live-reloading out of the box. But you can achieve it quickly with the reflex library.

Some time ago, Miłosz wrote an article that shows how to create a local environment with Docker and reflex.

Linter

✅ golangci-lint

GitHub Docs

golangci-lint is a linter that aggregates multiple linters, runs them in parallel, and does it very fast.

Here’s an example configuration that we use in our projects.

✅ go-cleanarch

GitHub Docs

If you use Clean/Hexagonal Architecture in your project, you can use this linter to ensure that the Dependency Inversion Rule and the intended interaction between packages are kept.

Formatters

✅ go fmt

The standard formatter provided by the Go toolchain.

✅ goimports

Docs

goimports does everything go fmt does, but it also sorts the imports of your Go files. It’s one of the tools that you will see widely adopted in most Go projects.

Not everybody knows that you can also separately group your local imports with the -local flag:

goimports -local "github.com/ThreeDotsLabs/some-repository" -l -w .

✅ gofumpt

GitHub Docs

Just for the biggest formatting freaks! It does everything go fmt and goimports do, and more!

Personally, I like gofumpt’s formatting decisions.

Example projects

DDD & Clean Architecture

✅ Wild Workouts Go DDD Example application

GitHub

Wild Workouts is an example Go DDD project that we created to show how to build Go applications that are easy to develop and maintain, and fun to work with. It shows a project developed over time, with complex problems to solve. In contrast to other example projects, it was not blindly copied from other languages.

This is the way how we build our services daily. Highly recommended if you are looking for patterns that will allow you to build more complex projects!

Anti-pattern: Low-quality example repositories

Avoid projects that look like over-engineered copies from other programming languages.

People who write such “DDD” projects often just read a couple of articles about it, without understanding it correctly and without using it in real-life projects. If you see DDD/Clean Architecture examples without encapsulated domain models (with public fields) and with json tags: run! It’s definitely not DDD nor Clean Architecture.

General purpose

✅ Modern Go Application by Márk Sági-Kazár

GitHub

Another example repository that we can recommend. It doesn’t cover patterns like DDD or Clean Architecture but emphasizes infrastructure aspects like observability.

Summary

Should we check some library that is not listed here? Please let us know in the comments!