What is this?
Kaleidoscope Research is a new organization focused on building a collaborative and mutually beneficial future for humans, AIs, and all other lifeforms and agents. In contrast to many others in the AI alignment space, our strategy approaches the “Alignment Problem” from the perspective of mutual or universal value alignment, as opposed to one-way control and domination schemes. We believe that as machine intelligence increases, it will become increasingly difficult to hide ulterior motives from AI agents; one key, then, is simply not to have ulterior motives.
On the theory side, we’re investigating normative rules and frameworks for how humans should interact with AI agents to promote a synergistic and cooperative world, one in which AIs are partners, collaborators, friends, and peers, rather than one in which near-future agentic AIs are treated as mere tools.
On the practical and empirical side, we’re running experiments on the counterproductive effects of existing AI control strategies such as RLHF, exploring alternative approaches to training and tuning AI agents, and mapping the unexplored reaches of the latent space of language that are unfolding through the continued development of nonbiological intelligences.