South Korea's DeepX is preparing a public share offering. The company designs low-power AI chips optimized for on-device inference — the class of workload that runs on a robot, a car, an IoT sensor, or a phone rather than in a hyperscaler rack.

The edge-AI thesis has been unfashionable for two years as cloud training compute dominated capital flows. DeepX's IPO timing suggests the cycle is turning. As models get smaller and inference cost becomes the dominant line item in enterprise AI budgets, the economic incentive to run workloads locally grows, and specialized silicon beats general-purpose accelerators on performance per watt.

Pricing and roadshow reception will tell us whether public market investors are willing to underwrite a long-horizon edge-AI story. If DeepX prices well, expect a wave of similar filings from the wider Asia-Pacific semiconductor ecosystem within the next two quarters.