The problem Instinct AI solves
Machine learning is pivotal in today’s data-driven world, but it remains locked behind complex code, costly infrastructure, and specialized skills. Businesses and developers are often bogged down by the intricacies of setting up ML pipelines, managing data, and iterating models, making it difficult to scale machine learning projects efficiently.
Challenges we ran into
- Challenge: Real-Time Progress Tracking in a Distributed Environment
- Solution: The engine makes periodic API calls to the backend to capture and relay AutoML progress, replacing WebSockets with polling for broader compatibility and stability.
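The polling loop can be sketched as below. This is a minimal illustration, not the production code: the shape of the progress snapshot and the idea of injecting a `send` callable (which in practice would be an HTTP POST to the backend) are assumptions.

```python
import time
from typing import Callable, Dict

def report_progress(
    get_progress: Callable[[], Dict],
    send: Callable[[Dict], None],
    interval_s: float = 5.0,
) -> None:
    """Poll the AutoML engine and push progress snapshots to the backend
    until training reports completion. Snapshot fields are illustrative."""
    while True:
        snapshot = get_progress()   # e.g. {"percent": 40, "done": False}
        send(snapshot)              # in production: an HTTP POST to the backend
        if snapshot.get("done"):
            break
        time.sleep(interval_s)
```

With `requests`, `send` could be as simple as `lambda s: requests.post(f"{BACKEND_URL}/progress", json=s)`; because the transport is plain HTTP, no long-lived socket has to survive container restarts.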
- Challenge: Maintaining Performance in High-Demand Environments
- Solution: Each project is isolated in its own Docker container, allowing scalable and isolated resource management. This prevents interference between projects and ensures stable, high-speed processing.
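Per-project isolation with resource caps can be expressed as a `docker run` invocation. The helper below is a sketch: the image name and default limits are illustrative, but the flags (`--cpus`, `--memory`, `--name`) are standard Docker options.

```python
from typing import List

def container_cmd(
    project_id: str,
    image: str = "instinct-engine:latest",  # illustrative image name
    cpus: float = 2.0,
    memory: str = "4g",
) -> List[str]:
    """Build a `docker run` command that isolates one project in its own
    container with explicit resource limits."""
    return [
        "docker", "run", "--rm", "--detach",
        "--name", f"project-{project_id}",
        "--cpus", str(cpus),    # cap CPU so one project cannot starve others
        "--memory", memory,     # hard per-container memory limit
        image,
    ]
```

The command list can be handed to `subprocess.run`; because each project gets its own named container with hard limits, a runaway training job degrades only its own project.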
- Challenge: Creating an Intuitive No-Code/Low-Code Canvas Interface
- Solution: We used React Flow to build a drag-and-drop canvas in Next.js. Iterative UX testing with real users led to refinements that simplified the experience for everyone from data-science novices to experts.
- Challenge: Managing Large Data and Model Storage Efficiently
- Solution: Integrated MongoDB for structured metadata and S3 for file storage, enabling rapid data retrieval and storage at scale while keeping everything secure and organized.
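The split between the two stores can be sketched as follows; the bucket/key layout and the metadata fields are assumptions for illustration, not the actual schema.

```python
from typing import Dict, Tuple

def storage_plan(project_id: str, dataset_name: str, payload: bytes) -> Tuple[str, Dict]:
    """Split a dataset upload: raw bytes go to S3, a small queryable
    metadata document goes to MongoDB. Layout is illustrative."""
    s3_key = f"projects/{project_id}/datasets/{dataset_name}"
    metadata = {
        "project_id": project_id,
        "name": dataset_name,
        "size_bytes": len(payload),
        "s3_key": s3_key,  # MongoDB stores only the pointer, never the bytes
    }
    return s3_key, metadata
```

With `boto3` and `pymongo` this would become `s3.put_object(Bucket=BUCKET, Key=s3_key, Body=payload)` followed by `db.datasets.insert_one(metadata)`; keeping large blobs out of MongoDB is what preserves fast metadata queries at scale.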
- Challenge: Enabling Complex Customization While Keeping it User-Friendly
- Solution: We designed the backend to allow rich configuration options but kept the UI simple, with tooltips, guided steps, and streamlined APIs for customization. This lets users set complex model parameters without overwhelming them.
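One way to keep a simple UI on top of a rich backend is to merge user overrides onto sensible defaults and reject anything unrecognized. The parameter names below mirror common AutoML settings and are illustrative of the approach, not the exact backend schema.

```python
from typing import Dict, Optional

# Defaults keep the UI simple; advanced users may override any field.
DEFAULTS = {
    "max_runtime_secs": 300,
    "max_models": 20,
    "balance_classes": False,
    "stopping_metric": "AUTO",
}

def build_config(user_overrides: Optional[Dict] = None) -> Dict:
    """Merge user-supplied overrides onto defaults, rejecting unknown keys
    so a typo surfaces immediately instead of being silently ignored."""
    overrides = user_overrides or {}
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```

A novice who touches nothing gets a working run; an expert can tune individual parameters, and the validation step doubles as backend documentation of what is tunable.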
- Challenge: Seamless Integration of Multiple Tech Stacks (Next.js, Node.js, FastAPI, H2O)
- Solution: Established a microservices architecture in which each stack serves a specialized role; services are containerized and communicate over HTTP APIs, keeping the technologies cohesive and the system efficient and robust.
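The glue between services can be reduced to a small registry mapping each role to its base URL. The hostnames and ports below are hypothetical Docker Compose service names, shown only to illustrate the pattern.

```python
# Each stack owns one role and is reached over HTTP by role name,
# so no service hard-codes another's address. Hostnames are illustrative.
SERVICES = {
    "frontend": "http://frontend:3000",  # Next.js canvas UI
    "backend": "http://backend:8080",    # Node.js API gateway
    "engine": "http://engine:8000",      # FastAPI wrapper around H2O AutoML
}

def service_url(role: str, path: str) -> str:
    """Resolve a service role to its base URL and join the request path."""
    base = SERVICES[role]
    return f"{base}/{path.lstrip('/')}"
```

Centralizing addresses this way means swapping a service's host or port (for example between local development and production) touches one table rather than every caller.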
- Challenge: Delivering Real-Time ML with Minimal Latency
- Solution: Optimized the backend and model-training processes with H2O's AutoML, keeping training and prediction latency low.