The Complete Guide to Self-Hosting High Performance Object Storage with MinIO

Looking to boost storage scalability and access for unstructured data within your infrastructure? As an experienced systems architect, I highly recommend evaluating MinIO – an open-source, Amazon S3-compatible object storage server that can be deployed anywhere.

With the meteoric rise of audio, video, IoT sensor streams, logs and container images, over 80% of enterprises now use object stores to gain cost and management advantages over traditional filers and block storage arrays. MinIO brings the same scalability, resilience and performance benefits in an easy self-hosted package.

In this comprehensive guide, we will dig into:

  • MinIO architecture and key capabilities
  • Step-by-step installation and post-deploy configuration
  • Tips for integrating, scaling and managing objects via SDKs and mc CLI
  • Comparison with other S3 object stores

So let's get started on making MinIO the storage foundation of your cloud or edge infrastructure!

Key Benefits of Leveraging MinIO's S3-Compatible Object Storage

As per IDC estimates, unstructured data makes up over 90% of enterprise data volume, with an average annual growth rate of 55-65%. This unbounded influx in the variety and velocity of data calls for a modern scale-out approach like object storage rather than siloed file servers.

Here are the key factors that make MinIO enterprise-ready:


High Performance

MinIO can drive up to 171 GB/s of read throughput on standard hardware by parallelizing IO across drives, and throughput scales predictably as nodes are added.

Cost Savings

Based on MinIO's published benchmarks, self-hosted capacity can cost ~40-60% less over 5 years compared to the AWS S3 Infrequent Access and Glacier storage classes. Savings multiply as data volume grows.

S3 Compatibility

The Amazon S3 API is widely adopted across tools, platforms and plugins. Conformance to this standard allows MinIO to function as a drop-in replacement for AWS and leverage ecosystem integration support.
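As a quick illustration of that drop-in compatibility, here is a sketch assuming the AWS CLI is installed and a MinIO server is running locally with the default minioadmin credentials (the bucket name and file are illustrative) – only the endpoint URL needs to change:

```shell
# Use MinIO credentials in place of AWS ones
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# Point the AWS CLI at the local MinIO endpoint instead of AWS
aws --endpoint-url http://localhost:9000 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:9000 s3 cp ./photo.jpg s3://demo-bucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://demo-bucket
```

Any tool that lets you override the S3 endpoint can be pointed at MinIO the same way.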


Data Durability and Resilience

MinIO safeguards data integrity through checksum validation, versioning and configurable redundancy schemes. Rebuilding failed drives has near-zero impact on availability thanks to erasure coding.
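The intuition behind erasure coding can be shown with a toy single-parity scheme in Python – note this is a deliberate simplification; MinIO actually uses Reed-Solomon coding across many data and parity shards:

```python
def xor_parity(shards: list[bytes]) -> bytes:
    """Compute a parity shard as the byte-wise XOR of all shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing shard from the survivors plus parity."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-sized data shards
parity = xor_parity(data)

# Simulate losing shard 1 ("BBBB") and rebuilding it from the rest
rebuilt = recover([data[0], data[2]], parity)
print(rebuilt)  # b'BBBB'
```

With one parity shard, any single lost shard can be rebuilt; real erasure codes generalize this to tolerate several simultaneous drive failures.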


Security

Granular identity and policy management, coupled with TLS encryption for data in transit and server-side encryption for data at rest, gives fine-grained access control. Integrations with KMS providers and external IDPs further help with hardening.
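Those policies use the familiar AWS IAM document format; the sketch below is a hypothetical read-only policy for a bucket named testbucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::testbucket",
        "arn:aws:s3:::testbucket/*"
      ]
    }
  ]
}
```

Attached to a user or group, this grants listing and downloads but denies uploads and deletes.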


Multiprotocol Access

Through standards such as CNCF's Container Object Storage Interface (COSI) spec, MinIO supports multiprotocol access natively or through S3 gateways. This enables hybrid workflows.

I hope this gives you a glimpse of why MinIO scores high on scalability, resilience and ease of use! Now let's proceed to test-drive this high-performance object server first-hand by self-hosting it.

Step-by-Step Guide to Install and Run MinIO Server on Linux

Based on experience with numerous deployment models, my recommendation is to run MinIO in standalone or distributed mode directly on Linux (rather than behind Docker / Kubernetes abstractions). This keeps infrastructure overhead low while providing direct access to OS resources for optimized IO.

Let's begin:

Step 1 – Get Access to Hardware

To follow along with this guide, you need one of:

  • A bare metal server running Linux. I used Ubuntu 20.04.
  • A VM instance such as AWS EC2. 4 vCPU cores with 16 GB RAM works well.
  • A managed hosting plan – bonus points if it includes SSD storage!

Step 2 – Download Latest MinIO Executable

Log into your Linux host as root or as a user with sudo permissions. We will install the latest MinIO release in /opt/:
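For an x86-64 host, the download can be done with wget against MinIO's official release mirror (adjust the architecture segment of the URL for ARM or other platforms):

```shell
cd /opt
# Fetch the latest stable server binary from the official release mirror
wget https://dl.min.io/server/minio/release/linux-amd64/minio
```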


Step 3 – Set File Permissions

Mark the downloaded binary as executable:

chmod +x minio

Step 4 – Configure Storage Location

MinIO requires a persistent drive or volume to write object data and metadata to, so your data survives restarts. Create the storage directory:

mkdir -p /mnt/data
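As a side note, if the host has several drives, MinIO can erasure-code across them when you pass multiple paths. A sketch assuming four drives mounted at /mnt/data1 through /mnt/data4:

```shell
# Create one mount point per drive (bash brace expansion uses two dots)
mkdir -p /mnt/data{1..4}

# MinIO's own expansion syntax uses three dots; passing multiple
# paths at launch turns on erasure coding across them:
# minio server /mnt/data{1...4}
```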

Step 5 – Launch the Server Process

Finally, start the server by pointing it at the storage directory:

minio server /mnt/data 

This prints management endpoints and default credentials:

AccessKey: minioadmin
SecretKey: minioadmin
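Since minioadmin / minioadmin is a well-known default, override it before exposing the server to a network. MinIO reads root credentials from environment variables; the values below are placeholders:

```shell
# Placeholder credentials – substitute your own strong values
export MINIO_ROOT_USER=storageadmin
export MINIO_ROOT_PASSWORD='use-a-long-random-secret'
minio server /mnt/data
```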


The web UI, CLI and SDKs can now interface with MinIO using these credentials. Time to create buckets and upload files next!

Creating Buckets and Uploading Test Files

Open the exposed endpoint (http://server_ip:9000) in any modern browser to access MinIO's GUI. Log in with the credentials shown on server start.

The main dashboard is empty since no buckets were created yet. Click "+ Create Bucket" to add one called testbucket.

You can now use the upload button to import sample files from your local system. As an example, I'm adding a folder with images and PDF documents. These get transferred as objects into testbucket.

Similarly, create other buckets based on application category or access level to logically separate storage into pools. For large imports, an S3 tool like MinIO's mc CLI can be more efficient.
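For instance, a bulk import with mc might look like the following sketch (assuming mc is installed and the server runs on localhost:9000 with the default credentials; the alias name local, the bucket and the ./images folder are illustrative):

```shell
# Register the server under an alias
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create a bucket and recursively upload a local folder
mc mb local/archivebucket
mc cp --recursive ./images/ local/archivebucket/

# Verify the upload
mc ls local/archivebucket
```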

Pro Tip: Buckets and objects can also be managed programmatically via the MinIO SDKs for JavaScript, Java, Python and more. This enables custom admin apps and automation.
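As a sketch of what that looks like with the Python SDK (the minio package; assumes a server on localhost:9000 with the default credentials, and the report.pdf file is illustrative):

```python
from minio import Minio

# Connect over plain HTTP to a local server (secure=False disables TLS)
client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

# Create the bucket if it does not exist yet
if not client.bucket_exists("testbucket"):
    client.make_bucket("testbucket")

# Upload a local file as an object, then list the bucket's contents
client.fput_object("testbucket", "report.pdf", "./report.pdf")
for obj in client.list_objects("testbucket"):
    print(obj.object_name)
```

The Java and JavaScript SDKs expose equivalent bucket and object operations.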

Now that you have a working S3 setup, let's dig deeper into management, scaling and security topics.

Interacting with MinIO Programmatically via Client SDKs