Frequently Asked Questions (FAQ)
Getting Started
What is Chameleon?
Chameleon is a chip (CPU, GPU, TPU, etc.) optimization engine that generates hyper-efficient instructions for a wide range of workloads—from AI and machine learning to video processing, simulations, and embedded systems. It removes traditional software friction by adapting to your platform, workload, and hardware in real time.
Can I use Chameleon without installing anything?
Yes. The upcoming Chameleon Cloud service lets you run Chameleon directly in your browser. There’s no need to download, install, or configure anything locally. It’s ideal for users who don’t have access to Linux machines or powerful GPUs.
Who is Chameleon for?
- AI/ML Engineers seeking faster model training and inference
- HPC Researchers looking to maximize simulation performance
- Media & Graphics Developers working with shaders, video, or 3D rendering
- Edge & Embedded Engineers operating under strict performance and power constraints
If you rely on GPUs—or want to—we make them far more efficient.
What should I expect to see in the interface and what controls are available?
Use the link below and the button beneath the videos to access the instructions for the Pilot Program.
Platforms & Requirements
What are the system requirements for running Chameleon?
Chameleon runs on most Linux PCs and servers, from enterprise-class machines to laptops. We have not yet tested Chameleon on Chromeboxes or the wide assortment of edge devices; however, it is designed to make optimal use of limited resources. Use the link below and the buttons beneath the videos to access the recommended specs for the Pilot Program.
Which platforms are supported?
Chameleon supports:
- Linux (over 50 distros, natively tested)
- Windows (in progress)
- Android (in progress)
- Cloud Platforms (via Chameleon Cloud – in progress)
We currently support NVIDIA, AMD, and Intel GPUs, with auto-tuned instructions generated for each using SPIR-V and other formats.
Using & Comparing
Does anything change in system operation when Chameleon is turned on and off? How can I test with and without it?
Comparisons of workload-optimization approaches are built in. The Chameleon Pilot shows the baseline performance for each included workload, and the controls let you compare that baseline against three different optimization methods.
Are there any programs or games I could run?
Workloads for the pilot are built in. Partners and customers can provide specific workloads as GLSL, or describe them to us, as inputs to the Chameleon OaaS offering. Future expansion of Chameleon will include the ability to add and run your own workloads, including applications and games, directly through the platform.
Are there training workloads we could run?
Same as above: the pilot provides built-in workloads, and partners or customers can provide their own GLSL or natural-language descriptions for evaluation.
Performance
What kind of performance gains should I expect?
- Phase 1 – 20× to 60× faster execution over unoptimized code
- Phase 2 – 20× to 1000× faster execution over unoptimized code
- Up to 90% of theoretical GPU performance unlocked
- Major reductions in energy consumption and latency
We’ll provide performance dashboards for cloud users post-run.
What kind of inputs can Chameleon work with?
- Natural language descriptions (e.g., “apply Gaussian blur”)
- GLSL code snippets
- C/C++ kernels
- Shader and media pipelines
- And soon, AI/ML model definitions
We handle complex permutations internally, delivering machine-level instructions tailored to your device.
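To illustrate the kind of GLSL snippet Chameleon accepts as input, here is a minimal sketch of a 3×3 Gaussian blur fragment shader, corresponding to the "apply Gaussian blur" intent above. The binding layout and variable names are hypothetical, for illustration only; they are not a required input format.

```glsl
#version 450
layout(location = 0) in vec2 uv;           // texture coordinate from the vertex stage
layout(location = 0) out vec4 color;       // blurred output pixel
layout(binding = 0) uniform sampler2D img; // hypothetical input image

void main() {
    vec2 px = 1.0 / vec2(textureSize(img, 0)); // size of one texel in UV space
    // 3x3 Gaussian kernel weights (1 2 1 / 2 4 2 / 1 2 1), normalized by 16 below
    float w[9] = float[](1.0, 2.0, 1.0, 2.0, 4.0, 2.0, 1.0, 2.0, 1.0);
    vec4 sum = vec4(0.0);
    int i = 0;
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            sum += w[i++] * texture(img, uv + vec2(x, y) * px);
    color = sum / 16.0;
}
```

A snippet like this, or simply the sentence "apply Gaussian blur," can serve as the starting point from which device-specific machine instructions are generated.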
What makes Chameleon different from compilers or AI tools?
- Chameleon bypasses code entirely, creating machine instructions from meaning (intent).
- It adapts in real-time across devices, drivers, and OS environments.
- It works without training or massive datasets.
- It uses our patented biomimetic computing architecture, not AI guesswork.
Security & Privacy
Is my data safe?
Yes. When using Chameleon Cloud:
- Sessions are temporary and automatically deleted after use.
- All data transmission is encrypted.
- We do not store your input files or model parameters.
For on-prem or enterprise deployments, additional controls and audit options are available.
Pilot & Cloud
What is the Pilot Program?
The Pilot Program gives participants access to:
- The Chameleon Cloud runtime
- Usage dashboards and performance insights
- Feedback channels to shape future features
- Priority onboarding for enterprise or private deployments
You’ll be notified once access is enabled.
What if the download does not work on my machine or in the cloud?
Chameleon is designed to work on over 50 different Linux distributions…
How can I sign up for Chameleon in the cloud?
Click the “Join Our Waitlist” button at the bottom of the page. Enter your email and organization details, and we’ll notify you as soon as cloud access is available.
Datacenter Deployment & Safety
Can Chameleon be safely installed in existing datacenters without risking servers or data?
Yes. Chameleon operates entirely in user space and never interacts with firmware, kernels, BIOS, drivers, or OEM
management layers. While it interacts with standard GPU and system drivers in the same way conventional applications
do, it never alters those drivers or changes system configurations.
What did Amazon (AWS) say about Chameleon’s safety?
AWS, via one of their Premier Partners tasked by Amazon, validated that:
- Chameleon does not touch firmware, kernel modules, BIOS, or drivers.
- It behaves like a standard user-space compute job.
- MindAptiv’s performance and energy metrics matched AWS measurements within milliseconds.
- There is zero system risk and nothing persistent is installed.
This validation is one reason AWS is moving into the next stage of testing with us.
Does the software overwrite kernels or BIOS?
No. Chameleon never interacts with:
- BIOS or UEFI
- Firmware of any kind
- Kernel drivers or modules
- Microcode or OEM provisioning tools
It is strictly a user-space executable.
Does the software need to be matched or tuned for each GPU?
No. Chameleon requires no GPU-specific configuration or per-device tuning. It adapts automatically using capabilities
reported by the GPU driver (via Vulkan/SPIR-V today). To learn more about scaling with multi-GPU support, see our Supercell web page.
Does Chameleon ever freeze or require a reboot/watchdog cycle?
No. Because Chameleon never enters system-level execution paths, it cannot cause OS instability or GPU driver crashes.
If a job fails, it simply stops—like any normal user-space workload.
What is the risk of bricking a GPU? Is there a recovery process?
Zero risk. Chameleon cannot write to firmware, drivers, or hardware-level memory regions. There is nothing to recover
because nothing persistent can be modified.
Can Chameleon be installed as an extension of OEM or factory kernel management?
It could be offered that way in the future, but it does not need to integrate with OEM kernel or firmware management
systems. Chameleon avoids system-layer coupling to prevent lock-in and simplify deployment.
Has Chameleon been tested on AMD, Intel, or ARM architectures?
Yes. Chameleon’s real-time SPIR-V generation is operational on:
- NVIDIA GPUs (T4, A10G, A100; H100, H200, B100, and B200 coming soon)
- AMD GPUs (Radeon RX 6400; MI-Series, Vega, and Instinct coming soon)
- Intel GPUs (GPU emulation on Intel CPUs; various Intel GPUs coming soon)
- ARM Mali GPUs on Android (coming in Phase 2)
Parity exists across Linux (50+ distros), with Windows and Android support coming soon.
What licensing model is used? Is there an operating/patching model?
Current models include:
- Per-job
- Per-node
- Enterprise subscription
- Shared Cost Savings
- OEM and appliance integration
Chameleon requires no agents, no ongoing patch cycles, no firmware updates, and no persistent runtime components
in customer environments.