

Niv-AI, a Tel Aviv-based AI infrastructure startup founded in May 2025, has secured $12 million in seed funding led by Glilot Capital Partners with participation from Grove Ventures, Arc VC, Encoded VC, Leap Forward Ventures, and Aurora Capital Partners. The round arrives as AI data centers confront acute power-management bottlenecks: GPU-intensive workloads create millisecond-scale electricity spikes, forcing operators to hold roughly 30% of capacity in reserve as a safety margin, an inefficiency that can cost hundreds of millions of dollars annually per facility.
CEO Tomer Timor compares thousands of synchronized Nvidia GPUs to electric kettles switching on simultaneously, generating extreme power surges that conventional metering cannot detect. Data center consumption was historically stable, but AI training and inference introduce unpredictable fluctuations that demand massive battery and capacitor storage systems as safety buffers; that hardware absorbs stranded capacity rather than enabling compute to scale.
Niv-AI attacks this inefficiency with high-frequency sensors that capture the “electrical fingerprints” of AI workloads, feeding real-time data into predictive models that dynamically balance compute against available power headroom. The platform unlocks trapped capacity without reducing GPU utilization or expanding physical infrastructure, optimizing at millisecond resolution before spikes trigger curtailment.
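Niv-AI has not published its control algorithms, but the core idea of trading compute admission against a forecast power envelope can be sketched. The Python toy below is an assumption-laden illustration: the class name, the facility limit, and the mean-plus-three-sigma spike estimate are invented for the example, not the company's method.

```python
import random
import statistics
from collections import deque

class HeadroomController:
    """Toy control loop: keep the forecast power peak below a facility limit
    by pausing or resuming deferrable GPU work (hypothetical design)."""

    def __init__(self, facility_limit_kw: float, window: int = 200):
        self.facility_limit_kw = facility_limit_kw
        self.samples = deque(maxlen=window)   # recent high-frequency power readings (kW)

    def ingest(self, reading_kw: float) -> None:
        self.samples.append(reading_kw)

    def forecast_peak_kw(self) -> float:
        # Crude predictor: recent mean plus three standard deviations stands in
        # for the millisecond-scale spike envelope of the current workload mix.
        mean = statistics.fmean(self.samples)
        spread = statistics.pstdev(self.samples)
        return mean + 3 * spread

    def decide(self) -> str:
        peak = self.forecast_peak_kw()
        if peak > self.facility_limit_kw:
            return "throttle_deferrable"     # pre-empt batch/training work before curtailment
        if peak < 0.9 * self.facility_limit_kw:
            return "admit_more_compute"      # reclaim stranded headroom
        return "hold"

# Stream simulated sensor readings, then query the controller.
ctl = HeadroomController(facility_limit_kw=10_000)
for _ in range(1_000):
    ctl.ingest(random.gauss(8_000, 400))     # stand-in for real telemetry
print(round(ctl.forecast_peak_kw()), ctl.decide())
```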
The system deploys sensors that measure power at finer resolution than standard meters, giving visibility into per-GPU consumption patterns across training clusters serving Llama-scale models. Predictive models analyze workload characteristics (transformer layer activations, batch sizes, attention-head utilization) against grid capacity constraints to forecast fluctuation trajectories, preemptively rescheduling non-critical inference into low-demand windows while prioritizing latency-sensitive serving.
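The article describes these predictive models only at a high level. As a minimal illustration of mapping workload characteristics to expected power draw, the sketch below fits a simple linear model; the feature set, the sample numbers, and the rack budget are all hypothetical.

```python
import numpy as np

# Hypothetical per-step workload features logged by the cluster scheduler.
# Columns: batch_size, active_attention_heads, activation_gb_moved
X = np.array([
    [  8, 32,  40.0],
    [ 16, 32,  75.0],
    [ 32, 64, 160.0],
    [ 64, 64, 310.0],
])
# Observed rack power draw (kW) for those steps, from high-frequency sensors.
y = np.array([210.0, 265.0, 420.0, 690.0])

# Fit a linear "electrical fingerprint": power as a function of workload shape.
coeffs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

def predict_power_kw(batch_size, heads, activation_gb):
    features = np.array([batch_size, heads, activation_gb, 1.0])
    return float(features @ coeffs)

# If the forecast for an upcoming step exceeds the rack budget, the job can be
# shifted to a low-demand window instead of tripping curtailment.
RACK_BUDGET_KW = 600.0
forecast = predict_power_kw(batch_size=48, heads=64, activation_gb=240.0)
print(round(forecast), "defer" if forecast > RACK_BUDGET_KW else "run now")
```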
The orchestration engine maintains production SLAs through constraint-aware scheduling: latency-critical inference (chatbot responses, recommendation ranking) receives guaranteed headroom, training jobs tolerate micro-preemptions across node ensembles, and batch processing migrates fluidly between availability zones. This contrasts with reactive approaches that overprovision static buffers, delivering a 20-30% effective capacity uplift through intelligent utilization rather than hardware expansion.
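A constraint-aware scheduler of this kind can be approximated in a few lines. The sketch below encodes the three tiers above as an admission policy under a power budget; the job names, power figures, and the rule that latency-critical work is always admitted are illustrative assumptions, not Niv-AI's actual engine.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    LATENCY_CRITICAL = 0   # chatbot responses, recommendation ranking: guaranteed headroom
    TRAINING = 1           # tolerates micro-preemptions
    BATCH = 2              # can migrate or wait for a low-demand window

@dataclass
class Job:
    name: str
    tier: Tier
    power_kw: float

def schedule(jobs: list[Job], available_kw: float) -> tuple[list[Job], list[Job]]:
    """Admit jobs in tier order until the power budget is exhausted;
    everything else is deferred (preempted or migrated)."""
    admitted, deferred = [], []
    remaining = available_kw
    for job in sorted(jobs, key=lambda j: j.tier):
        # Latency-critical serving always runs, mirroring the "guaranteed headroom" claim.
        if job.tier == Tier.LATENCY_CRITICAL or job.power_kw <= remaining:
            admitted.append(job)
            remaining -= job.power_kw
        else:
            deferred.append(job)
    return admitted, deferred

jobs = [
    Job("chat-serving", Tier.LATENCY_CRITICAL, 300),
    Job("llm-pretrain", Tier.TRAINING, 900),
    Job("embedding-backfill", Tier.BATCH, 400),
]
admitted, deferred = schedule(jobs, available_kw=1_100)
print([j.name for j in admitted], [j.name for j in deferred])
```

In this toy run the pretraining job is deferred while serving and batch backfill fit under the 1.1 MW budget; a production system would instead preempt or migrate at much finer granularity.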
The funding coincides with an inflection point in AI infrastructure power: forecasts of global data center demand exceed 100 GW of new capacity annually, while grid expansion lags at around 5 GW. Nvidia's Blackwell platform doubles power density versus Hopper, and inference scaling laws demand continuous cluster expansion, circumstances that amplify Niv-AI's value proposition for operators facing curtailment risk of $1 million or more per day.
Glilot Capital's Arik Kleinstein describes Niv-AI as a “foundational control plane for data center power,” emphasizing active workload orchestration over passive monitoring. The investor frames instantaneous capacity as the primary constraint on AI factories, positioning Niv-AI to capture structural economics as hyperscalers allocate more than $200 billion in CapEx constrained by electrical delivery.
Niv-AI navigates a fragmented power-optimization landscape spanning distinct layers: hardware mitigation (Eaton UPS systems, battery storage), software scheduling (Kubernetes resource quotas), and grid-level forecasting (gridX, AutoGrid). Its differentiation lies in workload-native intelligence that understands AI-specific consumption signatures, such as transformer memory-bandwidth spikes and attention-computation surges, versus generic container orchestration that lacks electrical-domain expertise.
The approach parallels financial risk systems that model millisecond market microstructure rather than daily aggregates, capturing the transient dynamics that determine actual headroom. Sensor fusion integrates facility metering, per-rack PDUs, GPU telemetry, and grid signaling (frequency containment reserves), creating a comprehensive optimization domain absent from CPU-centric schedulers.
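The article does not detail how these feeds are combined. One simple way to picture sensor fusion across streams with very different sampling rates is to align each feed onto a common timestamp, as in this hypothetical sketch (all readings invented for illustration).

```python
from bisect import bisect_right

def latest_before(stream, t):
    """Return the most recent value at or before time t, or None if the feed is empty."""
    times = [ts for ts, _ in stream]
    i = bisect_right(times, t)
    return stream[i - 1][1] if i else None

# Hypothetical time-stamped feeds (seconds, value); real feeds arrive at very
# different rates: facility meter ~1 Hz, rack PDUs ~10 Hz, GPU telemetry ~100 Hz.
facility_kw  = [(0.0, 9_200), (1.0, 9_450), (2.0, 9_300)]
rack_pdu_kw  = [(0.0, 610), (0.1, 640), (0.2, 700), (0.3, 655)]
gpu_power_w  = [(0.00, 680), (0.01, 702), (0.02, 745), (0.03, 690)]
grid_freq_hz = [(0.0, 50.01), (1.0, 49.97)]

def fused_view(t):
    """Align all streams onto one timestamp so the optimizer sees a single state."""
    return {
        "facility_kw": latest_before(facility_kw, t),
        "rack_pdu_kw": latest_before(rack_pdu_kw, t),
        "gpu_power_w": latest_before(gpu_power_w, t),
        "grid_freq_hz": latest_before(grid_freq_hz, t),
    }

print(fused_view(0.25))
```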
Production deployments target hyperscale operators managing facilities of 100 MW or more, where optimization compounds into annual savings of $50 million or more per site. Sensor arrays retrofit existing infrastructure without downtime, and the AI models adapt continuously to workload evolution (mixture-of-experts scaling, quantization strategies), delivering compounding efficiency gains. The revenue model combines SaaS optimization ($0.01 per managed kWh) with performance guarantees (capacity-uplift KPIs), aligning economics with measurable P&L impact.
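A back-of-envelope calculation connects these figures. Only the 100 MW facility size, the 30% stranded-capacity figure, and the $0.01/kWh fee come from the article; the capacity value per kW-month is an assumed market rate used purely for illustration.

```python
# Back-of-envelope check on the figures above (illustrative assumptions only).
facility_mw = 100
stranded_fraction = 0.30
reclaimed_kw = facility_mw * 1_000 * stranded_fraction            # 30,000 kW of trapped headroom

# Assumed market value of usable data center capacity; NOT a figure from the article.
assumed_value_per_kw_month = 140                                  # USD
annual_value = reclaimed_kw * assumed_value_per_kw_month * 12     # ~ $50.4M, in line with "$50M+"

# SaaS fee implied by $0.01 per managed kWh at a fully loaded 100 MW facility.
managed_kwh_per_year = facility_mw * 1_000 * 24 * 365
annual_saas_fee = managed_kwh_per_year * 0.01                     # ~ $8.76M

print(f"reclaimed capacity value ~ ${annual_value/1e6:.1f}M/yr, "
      f"SaaS fee ~ ${annual_saas_fee/1e6:.2f}M/yr")
```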
Team expansion targets electrical engineers who model grid interactions, ML researchers specializing in spatiotemporal forecasting, and hyperscaler relationship managers who can navigate procurement cycles. The six-month product timeline emphasizes safe integration that preserves the uptime SLAs critical to AI factories, where seconds of downtime cascade into multimillion-dollar losses.
Niv-AI aims to catalyze a shift in data center economics in which power, rather than GPU availability, becomes the primary constraint. Today's 30% stranded capacity represents more than $50 billion in annual waste across the global footprint, a quantifiable inefficiency ripe for intelligent extraction. The platform sits at the convergence of AI scaling laws demanding 10x annual compute growth and grid physics constraining instantaneous delivery.
Strategic implications ripple through the infrastructure stack: NeoLab AI operations platforms gain power-aware scheduling, Gradient Ventures portfolio companies optimize inference economics, and Halcyon energy intelligence surfaces optimal site selection that factors in grid constraints. The $12 million deployment arrives precisely as hyperscalers confront the physical limits constraining trillion-dollar intelligence ambitions.