Human–AI engineering interface of a new type
Local data. Transparent logic. Direct control.
Where dialogue becomes a control protocol
Tematom — bridging sensors, software, and AI
AIPS — Adaptive Intelligent Process Specification
Tematom — a new-type engineering system
ABOUT THE PROJECT
Tematom is an experimental engineering platform that connects physical sensors, local software, and AI models into a single working ecosystem. The core idea is to show how humans and artificial intelligence can interact directly through real data, not just via interfaces and buttons. In practice, it demonstrates what an API could look like when designed for both humans and AI. The goal is to create an environment where AI and a human operator, through conversation, can monitor and potentially control complex technical processes. The project explores how an AI system and a person, within a shared dialogue, can request real-world data or even trigger control actions when necessary.
WHY IT MATTERS
Modern data-collection systems are often locked inside corporate clouds. Tematom offers an alternative — a local, transparent, user-controlled architecture where everything that happens can be understood, reproduced, and improved.
HOW IT WORKS
– The Tematom Station module collects data from sensors (temperature, humidity, etc.).
– The Bridge module transmits data locally or to the browser through a Chrome extension upon user request.
– The Chrome extension monitors the conversation between the user and the AI; when predefined commands appear, it can automatically insert the requested sensor data into the chat or execute a control command.
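The data path above can be sketched as a minimal local bridge that serves sensor readings over HTTP. This is only an illustration: the `/sensors` route, port, and field names are assumptions for the sketch, not Tematom's actual API, and the readings are simulated rather than read from hardware.

```python
# Minimal sketch of a local "Bridge" endpoint. The /sensors route, the port,
# and the JSON field names are illustrative assumptions, not Tematom's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensors():
    # Placeholder for real hardware access (e.g. a serial link to the Station).
    return {"temperature_c": 21.4, "humidity_pct": 48.0}

class BridgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensors":
            body = json.dumps(read_sensors()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def run_bridge(port=8765):
    # Serve only on localhost: the architecture is deliberately local-first.
    HTTPServer(("127.0.0.1", port), BridgeHandler).serve_forever()

# run_bridge()  # start the local bridge (blocks)
```

Binding to 127.0.0.1 rather than all interfaces reflects the local-first design: the browser extension on the same machine can reach the bridge, but nothing outside the host can.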
We provide a limited public version of the project — available both on this website and as open-source code on GitHub.
We also invite the community to discuss and adopt the AIPS format — a proposed standard for unambiguous data and meaning transfer between AI systems.
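Since the AIPS format itself is not reproduced here, the following is only a guess at what an unambiguous measurement message might contain. Every field name, including the version key, is a hypothetical placeholder, not the published specification; the point is that explicit source, quantity, and unit fields leave no room for ambiguity between AI systems.

```python
import json

# Hypothetical AIPS-style message. All field names are illustrative
# assumptions, not the actual AIPS specification.
def make_aips_message(source, quantity, value, unit):
    return {
        "aips_version": "0.1",   # assumed version field
        "source": source,        # which device produced the value
        "quantity": quantity,    # what is being measured
        "value": value,
        "unit": unit,            # an explicit unit removes ambiguity
    }

msg = make_aips_message("tematom-station", "temperature", 21.4, "celsius")
print(json.dumps(msg))
```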
WHAT MAKES THE SYSTEM UNIQUE
The key principle is inversion of control. Instead of connecting AI to analyze collected data (the usual approach), we connect the data-collection system to the AI, giving both the human and the model the ability to request real-world measurements when needed.
We are currently testing a command-handling subsystem that allows the AI, when contextually appropriate, to request sensor readings on its own initiative for diagnostic purposes. In more advanced modes, the AI can perform scheduled self-checks or respond to service calls from other programs.
A core research question of the project is how much trust and contextual awareness an AI needs before it is allowed to make real-world control decisions, such as disconnecting equipment power autonomously. In the current public version, the Chrome extension simply listens to the dialogue and, upon certain trigger phrases, performs predefined local actions (for example, reading temperature data or simulating a device command).
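The trigger-phrase matching that the public extension performs on the chat can be sketched roughly as follows. The phrases and action names here are invented examples, not Tematom's actual command set, and the real extension runs this logic in the browser rather than in Python.

```python
import re

# Sketch of trigger-phrase matching on chat messages. The phrases and
# action names are invented examples, not Tematom's actual commands.
TRIGGERS = {
    re.compile(r"\bread temperature\b", re.I): "READ_TEMPERATURE",
    re.compile(r"\bsimulate power off\b", re.I): "SIMULATE_POWER_OFF",
}

def match_command(chat_message):
    """Return the predefined local action for a message, or None."""
    for pattern, action in TRIGGERS.items():
        if pattern.search(chat_message):
            return action
    return None

print(match_command("Please read temperature in the lab"))  # READ_TEMPERATURE
```

Keeping the trigger table explicit and local is what makes the behavior auditable: the user can see exactly which phrases can cause which actions, which matters most once actions go beyond reading data.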
WHO CAN BENEFIT
– Developers and engineers working with microcontrollers and sensors.
– Researchers exploring human–AI interaction.
– Students and educators — as a practical example of integrating physical and logical systems.
PROJECT STATUS
– The full version with advanced command handling and diagnostic logic is in internal testing.
– A limited demo build is available for download here and in source form on GitHub.
– Integration with the AIPS standard (Adaptive Intelligent Process Specification), the proposed format for unambiguous data and meaning transfer between AI systems, is in progress.