Planning for AI Engineer Singapore: Workshops
I am pretty excited about attending my first AI-focused conference, so I decided to document the journey as I prepare for it.
AI Engineer Singapore is happening from May 15 to 17, and the first day has a set of hands-on workshops. I want to be intentional about the ones I pick. A lot of these workshops are naturally tied to products, but I want to focus on the workflows, patterns, and systems behind them.
The main filter for me was simple: “what maps back to the kind of work I care about right now at Rork?”
A lot of my work sits around AI-generated code and the uncool parts that come with it: reviewing PRs, catching bad diffs before they waste someone else’s time, figuring out when an agent confidently made the wrong change, and learning how to structure tasks so the output is actually useful.
I do not want to come back with vague notes like “agents are the future.”
I want improvements to my workflow: a better preflight before I open PRs, better ways to compare multiple agent attempts, better prompts for handing off work, and better ways to turn failures into evals instead of just one-off lessons.
So I wanted the workshop plan to line up with problems I actually care about: code quality, delegation, multi-agent workflows, and evaluation loops.
The four workshops I selected ended up forming a nice progression.
Verify: how do agents avoid bad code?
First is Building a Verify Loop for Your Coding Agents with SonarQube.
Agent-generated PRs are still messy. One unreviewed change can break production for thousands of users within hours of merging. Sometimes the code looks reasonable at first glance and then becomes the reason I am up till 5 AM debugging it.
I am more interested in the pattern I can learn from this workshop: an agent writes code, an independent verifier checks it, the agent fixes the obvious issues, and only then does a human reviewer spend time on it.
Like a perfect PR for me to approve. I can wish, okay?
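To make sure I actually understand the pattern before the workshop, here is a minimal sketch of the loop as I picture it. Everything in it is a stand-in: `run_agent` and `run_verifier` are hypothetical callables, not SonarQube’s actual API.

```python
from typing import Callable

MAX_ROUNDS = 3

def verify_loop(
    task: str,
    run_agent: Callable[[str], str],        # stand-in: agent takes a task, returns a diff
    run_verifier: Callable[[str], list[str]],  # stand-in: independent checker, returns issues
) -> tuple[str, list[str]]:
    """Return (diff, unresolved_issues); an empty issue list means ready for a human."""
    diff = run_agent(task)
    for _ in range(MAX_ROUNDS):
        issues = run_verifier(diff)
        if not issues:
            return diff, []              # clean: now it is worth a human's time
        # hand the findings straight back to the agent as a fix task
        task = task + "\n\nFix these verifier findings:\n" + "\n".join(issues)
        diff = run_agent(task)
    return diff, run_verifier(diff)      # out of rounds: escalate with whatever remains
```

The part I care about is the shape: the verifier is independent of the agent, so the agent never gets to grade its own homework, and the human only shows up after the obvious issues are gone.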
Orchestrate: how do I run many agents without chaos?
Second is Building Your Own Software Factory by Cursor.
I use Codex daily, and I am also thinking more seriously about working with Cursor. I want to understand how people are using parallel agents, worktrees, review loops, and multiple implementations without creating a giant mess.
The useful part for me is learning how to split work cleanly and compare agent outputs, especially with their n-agents feature and the latest multi-tasking work.
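Here is roughly how I picture the fan-out, as a sketch rather than anything Cursor actually ships. The `run_agent` callable is a placeholder for whatever CLI or API drives the agent; the git worktree commands are the real mechanism that keeps attempts isolated.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def attempt(task: str, i: int, run_agent: Callable[[str, str], None]) -> tuple[str, str]:
    """One agent attempt in its own git worktree; returns (path, diff)."""
    branch, path = f"attempt-{i}", f"../attempts/attempt-{i}"
    subprocess.run(["git", "worktree", "add", "-b", branch, path], check=True)
    run_agent(path, task)  # stand-in: run the agent inside this isolated checkout
    subprocess.run(["git", "-C", path, "add", "-A"], check=True)  # so new files show up too
    diff = subprocess.run(
        ["git", "-C", path, "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout
    return path, diff

def fan_out(task: str, n: int, run_agent: Callable[[str, str], None]) -> list[tuple[str, str]]:
    # isolated checkouts mean parallel attempts cannot clobber each other
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda i: attempt(task, i, run_agent), range(n)))
```

Even if the workshop does it completely differently, having this baseline means I can compare what they teach against what I would have built naively.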
Delegate: how do I give agents better work?
Third is The Future of Building: OpenAI’s Codex for Software Engineering and Beyond by Gabriel Chua.
Since Codex is already part of my daily workflow, I want to pressure-test how I use it. Better task framing, subagents, skills, MCPs, review, research, automation, all of that.
I want to get better at handing off work without losing control of quality.
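To have something concrete to pressure-test, here is the rough handoff shape I want to get better at writing. The fields are my own guess at what an agent needs to avoid confidently making the wrong change, not anything from the workshop or OpenAI’s docs.

```python
# Hypothetical task-handoff template; every field name here is my own invention.
HANDOFF = """\
Goal: {goal}
Scope (files/modules that are fair game): {scope}
Constraints (invariants the change must not break): {constraints}
Done when (how I will verify): {acceptance}
Out of scope: {non_goals}
"""

task = HANDOFF.format(
    goal="Add rate limiting to the public API endpoints",
    scope="api/middleware/, api/routes/public.py",
    constraints="no new dependencies; keep p95 latency under 50 ms",
    acceptance="existing tests pass; new tests cover the 429 path",
    non_goals="auth changes, admin routes",
)
```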
Improve: how do I learn from failures?
The fourth and last is Ship Real Agents: Evals, Traces, and the Production Feedback Loop by Arize. This might be the most important one for me from a systems point of view.
If agents are going to be part of production workflows, we need to know when they fail, why they fail, and how to turn those failures into evals instead of just fixing them once and moving on.
“It worked in a demo” is not enough.
The real loop is traces, failures, eval datasets, experiments, and better behavior over time.
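In code, the loop I have in mind looks something like the sketch below. The `EvalCase` shape and the exact-match grader are my own simplifications, not Arize’s schema; the point is just that every production failure becomes a permanent test case.

```python
import json
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class EvalCase:
    input: str          # what the agent was asked to do
    bad_output: str     # what it actually produced
    failure_mode: str   # why that was wrong, in one line
    expected: str       # what "good" looks like, for the grader

def record_failure(trace: dict, failure_mode: str, expected: str,
                   path: str = "evals.jsonl") -> None:
    """Append a failed trace to the eval set instead of fixing it once and moving on."""
    case = EvalCase(trace["input"], trace["output"], failure_mode, expected)
    with open(path, "a") as f:
        f.write(json.dumps(asdict(case)) + "\n")

def run_evals(agent: Callable[[str], str], path: str = "evals.jsonl") -> float:
    """Replay every recorded failure against the current agent; return the pass rate."""
    with open(path) as f:
        cases = [json.loads(line) for line in f]
    # exact match is a crude grader; in practice this would be a rubric or an LLM judge
    passed = sum(agent(c["input"]) == c["expected"] for c in cases)
    return passed / len(cases)
```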
This gives me a full day that moves from verification to orchestration to delegation to evaluation.
My goal is to come back with notes I can apply the week after the conference. If you are attending any of the above workshops, let me know.