Our Go Stack: The Libraries That Power Our Applications
In our Go projects, we rely on a consistent and battle-tested stack of libraries that help us build reliable, maintainable, and scalable systems.
We started using Go in our stack many years ago (before Go v1), so many of our choices have changed over the years. In this post, I want to share some of the libraries we use regularly to power our Go apps.
The Go community is famous for advocating the use of the standard library and avoiding heavy dependencies (search for Go ORM options and you'll see what I mean!). I share some of these tendencies, but I also like to make pragmatic choices that deliver better code faster.
Here’s a look at some of the core libraries we use regularly and how they fit into our architecture.
Viper: Configuration Management Made Simple
https://github.com/spf13/viper
Viper is our go-to solution for managing application configuration. It supports reading from JSON, TOML, YAML, environment variables, flags, and more. We use it to load environment-specific settings and centralize configuration logic, making our services flexible and portable.
One side of application configuration is where the configuration comes from: a file, environment variables, or perhaps command-line parameters. The other side of the question is where to store those values. A long time ago, we used global configuration storage (usually in our utils package). While this approach has some benefits (strongly typed configuration values, for example), it gets complex when it comes to loading the configuration into that global variable.
Since adopting Viper, configuration management has become much simpler. What I love most about Viper is how it automatically loads configuration from config files, environment variables, and, if used with Cobra, command-line parameters with ease.
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// Set default values
	viper.SetDefault("port", 8080)
	viper.SetDefault("env", "development")

	// Automatically read from environment variables
	viper.AutomaticEnv()

	// Access values
	port := viper.GetInt("port")
	env := viper.GetString("env")

	fmt.Printf("Running in %s mode on port %d\n", env, port)
}
Cobra: CLI with Ease
https://github.com/spf13/cobra
Cobra pairs perfectly with Viper and is our preferred tool for building robust command-line interfaces. It enables us to create structured commands, subcommands, and flags with minimal boilerplate. We use Cobra to implement service entry points, management utilities, and operational tools.
A note on returning errors
We almost always use RunE instead of Run. RunE can return an error, but we make sure to return errors only when they relate to the command line and not to the execution. For example, errors from validating input command arguments are returned from RunE, while execution errors are not returned but only logged and displayed to the user.
Uber Fx: Dependency Injection and Lifecycle
https://github.com/uber-go/fx
Fx provides a powerful yet opinionated dependency injection system. It wires up our services, handles startup and shutdown sequences, and makes our application modules decoupled and testable. Fx's lifecycle hooks are particularly useful for managing background routines and graceful shutdowns.
Here’s a simplified example of how we structure our Fx applications:
// main.go
package main

import (
	"context"
	"log"
	"net/http"

	"go.uber.org/fx"
)

func main() {
	app := fx.New(
		fx.Provide(
			NewMux,
			NewHTTPServer,
		),
		fx.Invoke(StartHTTPServer),
	)

	if err := app.Start(context.Background()); err != nil {
		log.Fatal(err)
	}

	<-app.Done()

	if err := app.Stop(context.Background()); err != nil {
		log.Fatal(err)
	}
}

func NewMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	return mux
}

func NewHTTPServer(lc fx.Lifecycle, mux *http.ServeMux) *http.Server {
	srv := &http.Server{
		Addr:    ":8080",
		Handler: mux,
	}
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			go func() {
				if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
					log.Println("ListenAndServe error:", err)
				}
			}()
			return nil
		},
		OnStop: func(ctx context.Context) error {
			return srv.Shutdown(ctx)
		},
	})
	return srv
}

func StartHTTPServer(*http.Server) {}
This pattern keeps things explicit, testable, and modular. Each component (mux, server, handlers) can be cleanly separated, and fx.Lifecycle ensures startup and shutdown routines are handled consistently.
I come from a Java and C# programming background, so dependency injection and containers are something I'm comfortable using. I also know that the use of DI in Go is limited and there are many developers who don't like it (overcomplexity and opacity are the reasons usually cited).
I, however, think containers and DI are underused in Go and can benefit your code immensely. Dependency injection makes construction of structs much easier without worrying about dependency cycles, and it encourages patterns that are easier to test: use interfaces in your constructors and mock them in your tests.
Now, Uber Fx is not strictly a dependency injection framework (that would be Uber Dig, which Fx uses). By leveraging Dig, Fx provides DI as well as application lifecycle management.
When starting with Fx, there is a bit of a learning curve, especially if you haven't used containers and dependency injection before. There are Go DI libraries that are simpler to use (for example https://github.com/d3fvxl/di) and you might be fine using them. However, what we found is that the investment in learning Fx and Dig pays back in the long run.
Fx in particular makes starting and stopping applications easy, with support for handling interrupt signals, timeouts, and guaranteed shutdowns for running services.
Some Fx-related conventions
Over time, we have developed some conventions in using FX:
Always place lifecycle management logic inside the constructor
While you can add lifecycle management in the Provide method, we always do this inside the constructor. Our constructors always begin with New... in their name.
Constructor parameters
While we pass context.Context as the first parameter of almost all functions in our Go codebase, we don't do that for constructors used by Fx. Instead, we always use fx.Lifecycle as the first parameter of all constructors, whether they need lifecycle management or not, as they might need it in the future.
Echo: Lightweight, Fast Web Framework
Go has a rich and powerful HTTP stack, both for clients and servers. However, we find using a framework like Echo useful, as it takes care of a lot of boilerplate around authentication, CORS, logging, panic recovery, and error management. Echo is particularly lightweight and has a good ecosystem when it comes to inbound data validation and error handling.
Common Error Handling
Error handling in web servers is a topic that is beyond the scope of this post. However, over the years we have developed a pattern that helps us with managing the errors in relation to Echo, particularly around the visibility of errors.
When an error occurs in your application, there is information you need to log internally so you can track bugs and debug your code. But you also need to surface this error back to the user. The error the user sees cannot be as detailed as the one you log, even if it is a fairly common one like "Record not found" from your database. You need to translate that into a 404 error.
For this we use a CommonErrorHandler, which is integrated into Echo as a centralized error handler. Here's a simplified and anonymized version of how it works:
import (
	"errors"
	"net/http"

	"github.com/labstack/echo/v4"
	"github.com/rs/zerolog/log"
	"gorm.io/gorm"
)

func CommonErrorHandler(err error, c echo.Context) {
	ctx := c.Request().Context()

	// Echo's own errors already carry an HTTP status code.
	var echoErr *echo.HTTPError
	if errors.As(err, &echoErr) {
		log.Ctx(ctx).Warn().Err(echoErr).Msg("Echo error")
		_ = c.JSON(echoErr.Code, echoErr)
		return
	}

	// Known errors map to specific HTTP status codes.
	if errors.Is(err, gorm.ErrRecordNotFound) {
		_ = c.JSON(http.StatusNotFound, map[string]string{"error": "resource not found"})
		return
	}

	// Everything else is a 500 with a tracking code for log correlation.
	trackingCode := generateTrackingCode()
	log.Ctx(ctx).Error().Str("tracking_code", trackingCode).Err(err).Msg("Internal server error")
	_ = c.JSON(http.StatusInternalServerError, map[string]string{
		"error":         "unexpected error",
		"tracking_code": trackingCode,
	})
}

func generateTrackingCode() string {
	return "ABC123" // Replace with your own random generator
}
We map specific known errors (like GORM's ErrRecordNotFound) to appropriate HTTP codes and fall back to generic 500 errors for unexpected issues. The handler also returns a tracking_code, which we can use to trace logs in production.
You can use this pattern with custom errors in your code. The CommonErrorHandler can detect these errors and log the inner error while returning the outer error to the user.
I used a map in this example, but in real projects we use a struct with strict type control for serializing errors down to the client.
GORM: ORM for Data Persistence
GORM handles our database interactions. It strikes a good balance between flexibility and abstraction. We use GORM's model hooks, migrations, and query builder to manage relational data, particularly with PostgreSQL. Its support for custom types and associations makes it easy to model complex domains.
Use of ORMs is probably the most controversial part of this post. Many Go developers don't like using them, and I don't think they are entirely wrong. I developed in Ruby and Rails for a long time (Cloud 66 is mostly a Rails application) and I can say that ActiveRecord is a piece of art! It makes using the database so easy and so well integrated into business logic that you forget how painful this all can be, until you leave Rails.
A lot of this power comes down to the flexibility of Ruby as a programming language. Go has traded that flexibility for other strengths, but the byproduct is that building an ORM in Go is not an easy task and the developer experience is always sub-optimal.
Having said all this, I still think that for a relatively large application with more than a few domain objects, using an ORM is more helpful than not. We looked at many ORM options (including building our own) and finally settled on GORM. GORM handles a lot of use cases and works fairly well across different databases. It requires a good understanding of how it works to avoid issues further down the line (performance and surprises), but it's not too difficult to get it right.
One decision we made when using GORM was to always pair it with a Repository/Service pattern (see DDD patterns for more). This way GORM is confined within repositories, and we can also implement things like binding GORM database transactions to Echo web requests. We do this by passing a DB transaction in context.Context, which is used by all repositories. The transaction is started inside an Echo middleware and committed when the request reaches the middleware on its way out.
Testify: Testing with Confidence
https://github.com/stretchr/testify
Testify is essential to our testing workflow. We use both its mocking and suite packages. The mock package helps us create stubbed interfaces and expectations for unit testing, while suite helps us organize and run grouped tests with shared setup/teardown logic. It keeps our tests expressive and maintainable.
Our use of interfaces in repositories and services, alongside Fx DI, makes mocking different entities for testing purposes much simpler.
Asynq: Reliable Background Task Processing
https://github.com/hibiken/asynq
Asynq powers our background jobs and task queues. It provides a Redis-backed task processing system with retries, scheduling, and failure handling. We use it to offload non-blocking work such as sending emails, syncing data, or running heavy computations outside the request lifecycle.
As most of our projects use Redis, Asynq is a good choice for background jobs (it requires Redis). However, we are very interested in the progress the River (https://riverqueue.com/) team is making and might replace Asynq with River in our new project stacks.
Zerolog: Structured Logging with Zero Allocations
https://github.com/rs/zerolog
Zerolog is our logging library of choice for all Go projects. It's fast, efficient, and provides rich structured logging without the performance overhead of traditional loggers. It’s JSON-based by default, which makes it perfect for ingestion into modern log aggregation tools (e.g., Loki, ELK, Datadog).
We set up a logger at application start, often customizing the output format depending on the environment (pretty in dev, JSON in prod). Here's an example:
import (
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func init() {
	zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
	log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stdout})
}
We pass log.Ctx(ctx) into our services for contextual logging, ensuring that request IDs, user IDs, and other trace info propagate across boundaries cleanly. This has made debugging production issues much easier.
This can be bound to Echo middleware and Asynq workers so all web transactions and background jobs have a unique ID that can be traced across log lines.
Each of these libraries contributes to a cleaner, more robust development experience. Together, they help us ship production-grade services quickly and confidently. In future posts, we’ll explore how we integrate them with real-world patterns and best practices.