We're building a cognitive classification engine trained on a 2,048-type personality system that has never been formally validated or computationally modelled. Two founding roles. Both first hires.
The core question is simple: does OPS measure what it claims to measure?
But answering that question requires you to go deep. This is not a role where you show up, run a standard study protocol, and write a paper.
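Going deep on that question starts with measurement basics: before any modelling, independent typologists have to agree with each other per coin, or there is no signal to learn. A minimal sketch of that check in plain Python, computing Cohen's kappa for one coin between two raters (the labels below are invented for illustration, not real OPS data):

```python
def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two binary label sequences.

    Returns 1.0 for perfect agreement, ~0.0 for chance-level agreement.
    Assumes labels are 0/1 and both raters used both labels
    (otherwise expected agreement hits 1 and the ratio is undefined).
    """
    n = len(rater_a)
    # Observed agreement: fraction of subjects both raters labelled the same.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    p_expected = pa * pb + (1 - pa) * (1 - pb)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels for one coin across eight subjects.
coin_rater_1 = [1, 1, 0, 0, 1, 0, 1, 0]
coin_rater_2 = [1, 1, 0, 0, 0, 0, 1, 1]
print(cohen_kappa(coin_rater_1, coin_rater_2))  # → 0.5
```

Kappa near zero means the raters agree no more than chance would predict; a clearly positive kappa on each of the 9 coins is the precondition for any model having something real to recover.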
We care less about your degree and more about how your mind works.
Trained OPS typologists watch a person and classify them across 9 binary coins. They’re seeing something — patterns in speech, body language, word choice, facial expressions, vocal tonality, response patterns.
The question is: what exactly are they seeing, and can a model learn to see it too?
This is a multimodal classification problem. Video, audio, and language data for hundreds of subjects with known type codes. Figure out which signals predict each coin — and eventually build a system that classifies without human observers.
There’s no Kaggle dataset, no benchmark, no existing literature. You’ll define the feature space, the architecture, and the evaluation criteria from scratch.
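Starting from scratch still has a natural baseline shape: treat each of the 9 coins as an independent binary classification head over a fused per-subject feature vector. A minimal NumPy sketch, with synthetic features standing in for the real video/audio/language signals (every dimension and dataset here is a placeholder, not the actual OPS data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features, n_coins = 200, 32, 9

# Placeholder for fused multimodal features, one row per typed subject.
X = rng.normal(size=(n_subjects, n_features))
# Synthetic ground truth: each coin is a noisy linear function of the features.
true_W = rng.normal(size=(n_features, n_coins))
Y = (X @ true_W + rng.normal(scale=0.5, size=(n_subjects, n_coins)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Nine independent logistic-regression heads, trained jointly by
# full-batch gradient descent on the cross-entropy loss.
W = np.zeros((n_features, n_coins))
lr = 0.1
for _ in range(500):
    probs = sigmoid(X @ W)
    W -= lr * X.T @ (probs - Y) / n_subjects

# Per-coin training accuracy — the crudest possible evaluation criterion.
acc = ((sigmoid(X @ W) > 0.5) == Y).mean(axis=0)
print(acc.round(2))
```

Independent logistic heads over shared features are only the zero-th iteration; the real work is deciding what goes into `X`, whether the coins are actually independent, and what evaluation criterion counts as "the model sees what typologists see."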
Strong advantages (not required): affective computing, computational social science, video understanding models, production ML APIs, infra (AWS/GCP, Docker), personality psychology familiarity, startup experience.
Most engineering roles: optimise this ad model, build this CRUD app, fine-tune this LLM. Well-scoped, incremental, someone already defined the problem.
This is different. First engineer at a company building a classification system for something never computationally modelled before. The data is real. The feature space is undefined. The architecture is yours.
You’re not joining an engineering team. You’re building it.