AI Research Scientist

Maincode


Date: 2 days ago
City: Melbourne, Victoria
Salary: A$150,000 - A$180,000 per year
Contract type: Full time

About Maincode


Maincode is an applied AI lab building systems that move humans forward.

We believe the next frontier of intelligence isn’t automation; it’s amplification. Our mission is to build AI that augments human judgement, helping people make more informed decisions, reducing cognitive friction, and surfacing meaningful context. To get there, we’re not building apps on top of today’s models; we’re building the next generation of model capabilities themselves.


We call this augmented decision intelligence, and it demands new building blocks: new abstractions, new architectures, and new ways for AI to interact with humans at the level of shared reasoning, not just response generation.


We bring together researchers from fields like neuroscience, systems engineering, linguistics, cognitive science, and interaction design: people who want to build intelligence systems that support human agency, not replace it.


If you’ve been independently exploring how today’s models fall short, and you’re eager to invent what’s missing, Maincode is a collider for people like you.


The Role: AI Research Scientist


As an AI Research Scientist at Maincode, your work will live at the model and architecture level, not at the application layer. We’re not optimising existing tools; we’re trying to discover what doesn’t exist yet, but should.


This is applied research in the strongest sense. You don’t need to be a full-stack engineer, but you must be able to make your ideas real in code. Whether you’re building symbolic scaffolds, reasoning agents, novel token dynamics, or custom planning loops, we care about your ability to prototype, test, and iterate on new mechanisms in a runnable system.


Your outputs won’t be papers. They’ll be behaviours. Demos. Model capabilities that can be experienced, not just described.


You might be:

  • A neuroscientist building computational models of how humans integrate uncertainty over time
  • A linguist experimenting with model representations of pragmatic meaning and context
  • A systems thinker creating dynamic module-swapping architectures for reasoning agents
  • A physicist applying field theory or energy-based principles to symbolic control loops
  • A cognitive scientist re-framing decision theory with value-aligned agent design


What matters is that you’re actively developing your ideas about AI, and that you’ve already been prototyping, exploring, or questioning the edges of what models can do.


What You’ll Do


  • Invent and test new AI capabilities that push beyond current architectural limits
  • Frame research questions rooted in human-AI collaboration and co-reasoning
  • Prototype new abstractions, agents, scaffolds, symbolic loops, or planning mechanisms in working code
  • Collaborate with engineers, designers, and other scientists to validate ideas in real-world experiments
  • Help define the primitives and interfaces for systems that feel less like tools and more like thinking partners


You Might Be a Fit If


  • You’ve conducted original research in neuroscience, linguistics, cognitive science, systems, physics, or a related field
  • Your work has directly engaged with AI, whether methodologically, conceptually, or by building working systems
  • You’re not a career software engineer, but you’re fluent enough in code to prototype and iterate your ideas
  • You’ve built experimental systems (no matter how rough) to explore reasoning, memory, attention, or decision-making
  • You’re impatient with incrementalism and want to invent new building blocks, not polish old ones
  • You see AI as a means of amplifying human agency, and you want to build systems that reflect that
