#Ruby
Introduction to fibers
February 02, 2026
//8 min read
For a long period of time, concurrency in Ruby was purely based on threads. While Fibers have existed since Ruby 1.9, Ruby 3.0 introduced a powerful Fiber Scheduler interface that unlocks their full potential for concurrent I/O. Yes, I’m aware we are on Ruby 4 now but not everyone has had a chance to understand the benefits of Fibers.
This post is an introduction to fibers and explains the difference between thread-based concurrency. However, before we dive into the differences, let’s understand why we need concurrency in the first place.
The Problem: I/O is slow and blocking by default
When your program reads from a network socket, the data might not be there yet. It’s traveling across the internet. Your CPU could execute millions of instructions in the time it takes for one network response.
# CPU just waits here doing nothing for maybe 50 milliseconds, that's ~100 million CPU cycles wasted
data = socket.read
By default, when you call socket.read it blocks your program until data arrives:
data1 = socket1.read # wait 50ms
data2 = socket2.read # wait 50ms
data3 = socket3.read # wait 50ms
# Total: 150ms
Even if all three servers send data at the same time, we still process them one by one.
Non-blocking I/O alternative
The more optimal approach would be to start all three requests, and wait for any of them to be ready and process it. In this case, the total time would be more or less the time of the longest request (~50ms in this case).
You basically want to tell your program “don’t wait, just tell me if data is available right now”:
socket.read_nonblock(1024, exception: false)
However, that introduces another problem - if data is not ready, what do you do? You can’t just spin in a loop checking millions of times per second. That burns 100% CPU.
IO.select - efficient waiting
This is where IO.select comes in. It’s a system call that says “here are some I/O objects - put me to sleep until at least one of them is ready.”
# Give it arrays of I/O objects to watch
readable, writable, errors = IO.select(
[socket1, socket2, socket3], # wake me when any is readable
[], # wake me when any is writable
[], # wake me on errors
10 # timeout in seconds (optional)
)
# readable is an array of sockets that have data available NOW
readable.each do |socket|
data = socket.read_nonblock(1024) # guaranteed not to wait
end
The OS efficiently sleeps your process and wakes it only when there’s actual data. No CPU spinning.
Note: readable means data has arrived in the receive buffer, writable means send buffer has space to accept more data
Even though IO.select is built-in and works on all platforms, it’s slower on a large number of I/Os, so depending on the OS you might want to use epoll, kqueue or io_uring:
- epoll (Linux)
- kqueue (macOS/BSD)
- io_uring (newer Linux)
Fibers enter the scene
Given that we stated the potential problem, let’s take a look at Fibers first, then we will see how they help to make this code more manageable.
In a single sentence, Fiber is like a pausable function. You can stop it mid-execution and resume later.
fiber = Fiber.new do
puts "Step 1"
Fiber.yield # pause here
puts "Step 2"
Fiber.yield # pause here
puts "Step 3"
end
fiber.resume
fiber.resume
fiber.resume
Outcome
Step 1
Step 2
Step 3
Note on Fiber Schedulers: Ruby doesn’t include a built-in fiber scheduler, so blocking operations like sleep or socket.read won’t automatically yield to other fibers. For production use, you’ll need a scheduler like the async gem by Samuel Williams, which provides a complete implementation with timeouts, cancellation, and many other features. Later in this article, we’ll build a minimal educational scheduler to understand how they work.
Fibers for non-blocking I/O without thread overhead and mutexes
Let’s look at an example where we fire 3 requests and process them concurrently:
require 'socket'
sockets = {} # socket => fiber
%w[example.com httpbin.org ruby-lang.org].each do |host|
Fiber.new do
socket = TCPSocket.new(host, 80)
socket.write("GET / HTTP/1.1\r\nHost: #{host}\r\nConnection: close\r\n\r\n")
sockets[socket] = Fiber.current
Fiber.yield
data = socket.read_nonblock(1000)
puts "Got from #{host}: #{data.length} bytes"
socket.close
end.resume
end
# Event loop resumes fibers when ready
while sockets.any?
readable, _, _ = IO.select(sockets.keys)
readable.each do |socket|
sockets.delete(socket).resume
end
end
Output (order may vary based on which server responds first):
Got from example.com: 1000 bytes
Got from httpbin.org: 1000 bytes
Got from ruby-lang.org: 1000 bytes
All three requests are processed concurrently - the total time is roughly the slowest response, not the sum of all three.
Now you may ask, can’t we use threads to achieve the same? Yes, and it’s a valid question.
# Threads example
threads = %w[example.com httpbin.org ruby-lang.org].map do |host|
Thread.new do
socket = TCPSocket.new(host, 80)
socket.write("GET / HTTP/1.1\r\nHost: #{host}\r\nConnection: close\r\n\r\n")
data = socket.read(1000)
puts "Got from #{host}: #{data.length} bytes"
socket.close
end
end
threads.each(&:join)
Memory. Each thread allocates its own stack by default, typically around 1MB on 64-bit systems. A thousand threads means ~1GB of memory just for stacks. Fibers are much lighter - benchmarks show (values are in RSS - Resident Set Size in KB, the actual physical memory used; divide by number of fibers to get per-fiber cost) each fiber consumes roughly ~13KB of physical memory, meaning a thousand fibers use only ~13MB.
No mutexes needed. The OS can switch between threads at any point - even in the middle of counter += 1. This means two threads can corrupt shared data:
counter = 0
# With threads - BROKEN
threads = 100.times.map do
Thread.new { 1000.times { counter += 1 } }
end
threads.each(&:join)
puts counter # might be 98,432 instead of 100,000 (non-deterministic!)
You need a mutex to fix this:
mutex = Mutex.new
threads = 100.times.map do
Thread.new { 1000.times { mutex.synchronize { counter += 1 } } }
end
Fibers don’t have this problem. A fiber only yields at points you control - at I/O boundaries. Between those points, your code runs without interruption. No surprises, no mutexes needed.
Fibers vs Threads
So now you may ask, should I rewrite my code to use Fibers instead of Threads everywhere? No, here is when and why:
If your application spends most of its time waiting for network responses, database queries, or file operations (basically I/O-based workflows) fibers excel. They are more memory efficient, and eliminate race conditions (since they yield at I/O checkpoints) without sacrificing concurrency. The lower memory footprint of fibers becomes significant when you need thousands of concurrent operations.
However, you need to make sure that your Ruby code (including gems you are using) are fiber-aware. Libraries with blocking C extensions or those that don’t integrate with the Fiber Scheduler will block the entire process.
Also, because fibers use cooperative scheduling, a single fiber doing heavy computation can completely block all other fibers:
Fiber.schedule { sleep(1); puts "Blocked!" }
Fiber.schedule { 1_000_000_000.times { } } # Monopolizes CPU
Fiber.scheduler.run
# First fiber won't wake up until computation finishes!
Threads don’t have this problem because they are automatically preempted by the OS, ensuring fair CPU time distribution across all threads. Also, if your application alternates between computation and I/O, threads usually provide better overall responsiveness.
What About M:N Threading?
Ruby 3.3 introduced M:N threading (enabled via RUBY_MN_THREADS=1), which maps M Ruby threads onto N OS threads (default 8, configurable via RUBY_MAX_CPU). This is an interesting middle ground - you get preemptive scheduling (so no single thread can monopolize the CPU) without requiring a dedicated OS thread per Ruby thread. Technically, M:1 (many Ruby threads on a single OS thread) is the closest to fibers - both run concurrently on one OS thread, but M:1 threads are still preemptively scheduled by the VM. In either case, you still need mutexes for shared state and libraries don’t need any special compatibility, unlike fibers.
However, M:N threading is still experimental and disabled by default on the main Ractor due to C-extension compatibility concerns (particularly around native thread-local storage). It remains opt-in even in Ruby 4.0.
Quick Comparison
| Aspect | Threads (1:1) | M:N Threads | Fibers |
|---|---|---|---|
| Concurrency type | Preemptive | Preemptive | Cooperative |
| Memory | 1MB per thread | 1MB per thread | ~13KB per fiber |
| Switching | OS decides | OS + VM decide | You decide (at I/O points) |
| Race conditions | Yes, need mutexes | Yes, need mutexes | No, yields only at I/O |
| CPU-heavy work | Automatically preempted | Automatically preempted | Blocks all fibers |
| Library compatibility | All gems work | All gems work | Gems must be fiber-aware |
Fibers scheduler - elephant in the room
As mentioned earlier, Ruby doesn’t include a built-in scheduler. When a scheduler is configured, Ruby uses it to automatically switch between fibers during I/O operations - calling the scheduler’s methods like io_wait or kernel_sleep when blocking operations occur.
Our own scheduler
To better understand the job of the scheduler, let’s build the most basic scheduler.
The scheduler’s job is to track which fibers are waiting for what (I/O readiness or time to pass), and resume them when their conditions are met. It maintains three collections: @readable for fibers waiting to read from sockets, @writable for fibers waiting to write, and @waiting for fibers that called sleep().
When Ruby encounters a blocking operation inside a fiber (like socket.read or sleep), it calls the corresponding scheduler method (io_wait or kernel_sleep). The scheduler records what the fiber is waiting for and yields control. The event loop (run method) uses IO.select to efficiently wait until something is ready, then resumes the appropriate fibers.
class FibersScheduler
def initialize
@readable = {} # socket => fiber that wants to read
@writable = {} # socket => fiber that wants to write
@waiting = [] # [fiber, wake_time] for sleeping fibers
end
# Called by Ruby when IO operation would block
def io_wait(io, events, timeout = nil)
@readable[io] = Fiber.current if (events & IO::READABLE) != 0
@writable[io] = Fiber.current if (events & IO::WRITABLE) != 0
Fiber.yield # pause fiber, return to run loop
events
end
# Called by Ruby when code calls sleep()
def kernel_sleep(duration = nil)
@waiting << [Fiber.current, Time.now + duration] if duration
Fiber.yield
end
# Called when a blocking operation occurs
def block(blocker, timeout = nil)
Fiber.yield
end
# Called to unblock a fiber
def unblock(blocker, fiber)
fiber.resume
end
# Called by Fiber.schedule to create and immediately run a new fiber
def fiber(&block)
# blocking: false tells Ruby to use the scheduler for I/O operations
Fiber.new(blocking: false, &block).tap do |fiber|
fiber.resume
end
end
# The event loop
def run
while @readable.any? || @writable.any? || @waiting.any?
# Calculate timeout for sleeping fibers
timeout = nil
if @waiting.any?
earliest = @waiting.map(&:last).min
timeout = [earliest - Time.now, 0].max
end
# Wait for I/O or timeout
readable, writable, = IO.select(@readable.keys, @writable.keys, [], timeout)
# Resume fibers whose sockets are ready
readable&.each do |io|
fiber = @readable.delete(io)
fiber.resume if fiber
end
writable&.each do |io|
fiber = @writable.delete(io)
fiber.resume if fiber
end
# Resume fibers whose sleep time has passed
now = Time.now
@waiting.reject! do |fiber, wake_time|
if wake_time <= now
fiber.resume
true
end
end
end
end
def close
end
end
As you can see, the scheduler is just a simple event-loop built on top of IO.select code we played with above.
Why do we need block and unblock?
The io_wait and kernel_sleep methods handle specific blocking scenarios (I/O and sleep). The block and unblock methods handle synchronization primitives like Mutex, ConditionVariable, and Queue.
Important detail: block(blocker, timeout) does not receive a fiber as a parameter - it always pauses Fiber.current (the fiber calling it). But unblock(blocker, fiber) receives the waiting fiber as a parameter because it needs to specify which fiber to wake up.
Example: Fiber 1 calls mutex.lock → triggers block(mutex, nil) → pauses Fiber 1. Later, Fiber 2 calls mutex.unlock → triggers unblock(mutex, fiber1) → resumes Fiber 1.
In our simple scheduler, these methods just yield and resume. A production scheduler like the async gem would track these blocking relationships and ensure fibers are resumed in the correct order.
Now let’s try to use it. Note that Fiber.schedule creates a non-blocking fiber and runs it immediately - it’s a convenience method that calls our scheduler’s fiber method:
Fiber.set_scheduler(FibersScheduler.new)
Fiber.schedule do
puts "Fiber 1: sleeping"
sleep(1)
puts "Fiber 1: woke up"
end
Fiber.schedule do
puts "Fiber 2: sleeping"
sleep(0.5)
puts "Fiber 2: woke up"
end
Fiber.scheduler.run
Output:
Fiber 1: sleeping
Fiber 2: sleeping
Fiber 2: woke up
Fiber 1: woke up
Both fibers sleep concurrently. Without the scheduler, this would take 1.5 seconds. With the scheduler, it takes 1 second.
Note on error handling: Our simple scheduler doesn’t handle exceptions that occur within fibers. If a fiber raises an exception, it would crash the entire program. Production schedulers like the async gem catch fiber exceptions, log them, and continue processing other fibers gracefully.
Conclusion
Fibers provide a lightweight alternative to threads for handling concurrent I/O operations. Even though they offer better memory efficiency, eliminate the need for mutexes in many scenarios, and give you predictable control over when context switches occur, they are not the silver bullet, and tradeoffs need to be carefully evaluated.
While our simple scheduler demonstrates the core concepts, remember to use a production-ready scheduler like the async gem for real applications.