Okay, so I’ve been messing around with this thing called Ray, and I wanted to share my little experiment – I call it “ray fight”. It’s nothing fancy, just me trying to get a handle on how Ray works for distributed computing. I’m no expert, just a guy who likes to tinker.

Getting Started
First, I needed to get Ray installed. It was pretty straightforward, just a simple pip install:
pip install ray
I fired up a Python script and imported the Ray library.
import ray
The Basic Idea
The whole point of my “ray fight” was to simulate a bunch of, let’s call them “fighters,” doing some simple calculations. Each fighter would have a “strength” value, and they’d repeatedly “attack” each other, which just meant doing some math with their strength values. The goal was to see if I could distribute these “fights” across multiple processes using Ray.

Setting up the Fighters
I created a simple Python function to represent a single “fight” round.
@ray.remote
def fight_round(fighter1_strength, fighter2_strength):
    # Simulate some "damage" calculation
    damage = fighter1_strength * 0.1 + fighter2_strength * 0.05
    return damage
See that @ray.remote decorator? That’s the magic. It tells Ray that this function can be run remotely, in a separate process. I learned it by looking at the Ray documentation.
Then, I created a bunch of “fighters”. Each fighter would attack many times.

num_fighters = 10
num_rounds = 100
fighter_strengths = [i + 1 for i in range(num_fighters)] # Strengths from 1 to 10
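As a quick sanity check on the pairing scheme I use in the main loop below (round i pits fighter i against fighter i+1, wrapping around), here’s a standalone sketch – pure Python, no Ray needed:

```python
num_fighters = 10

# Round-robin pairing: round i pits fighter i against fighter i+1 (wrapping around)
pairs = [(i % num_fighters, (i + 1) % num_fighters) for i in range(5)]
print(pairs)  # [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
```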

Unleashing the (Distributed) Fury!
Now for the fun part. Instead of running all the fight rounds sequentially in a single process, I used Ray to distribute them. Here’s how the core loop looked:
ray.init()  # Initialize Ray

results = []
for i in range(num_rounds):
    # Pair up fighters (round-robin style)
    fighter1_index = i % num_fighters
    fighter2_index = (i + 1) % num_fighters
    # Submit the fight_round task to Ray
    result_id = fight_round.remote(fighter_strengths[fighter1_index], fighter_strengths[fighter2_index])
    results.append(result_id)

# Get all the results
final_results = ray.get(results)
I started Ray with ray.init(). The fight_round.remote() part is key. Instead of calling the function directly, I called .remote(). This doesn't execute the function immediately. Instead, it submits the function call as a task to the Ray cluster. Ray then schedules that task to run on an available worker process (which could be on the same machine or, if I had set it up, on a different machine!). result_id is an object reference – kind of like a receipt for the task. I didn't have the actual result yet, just a promise of a result. I collected all these object references in the results list.

Finally, I used ray.get(results). This is where I actually waited for all the tasks to finish. ray.get() takes a list of object references and returns a list of the actual results. It blocks until all the tasks are complete.
The Aftermath
After running this, I printed out final_results. It was a list of all the "damage" values from each fight round. The important thing wasn't the numbers themselves, but the fact that Ray had managed to run all these calculations, seemingly in parallel (or at least, concurrently). I could see the potential – if I had a much bigger calculation, splitting it up like this could save a lot of time.
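For comparison, here's what the exact same calculation looks like done the plain sequential way, with no Ray at all – it produces the same list of damage values, just one at a time in a single process:

```python
num_fighters = 10
num_rounds = 100
fighter_strengths = [i + 1 for i in range(num_fighters)]

def fight_round_local(s1, s2):
    # Same "damage" formula as the remote task
    return s1 * 0.1 + s2 * 0.05

sequential_results = []
for i in range(num_rounds):
    f1 = i % num_fighters
    f2 = (i + 1) % num_fighters
    sequential_results.append(fight_round_local(fighter_strengths[f1], fighter_strengths[f2]))

print(len(sequential_results))  # 100
```

For toy math like this the sequential version is actually faster (no task-scheduling overhead), which is a good reminder that Ray pays off when each task does real work.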
Cleaning Up
Lastly, I used ray.shutdown() to shut down the Ray runtime. It's good practice to do this when you're finished.
So, that was my "ray fight" experiment. It was a simple example, but it helped me understand the basic mechanics of how Ray distributes tasks. There's a ton more to learn about Ray – object stores, actors, and all sorts of fancy stuff – but this was a good first step for me.