We introduce PeerCache, a scale-out I/O layer that delivers extreme file performance.
There is a standard Intel strategy of blurring the line between file I/O and memory I/O. We follow it with NVMe, but instead of building filesystems out of NVMe, we make your local filesystem look like a cache.
We then provide a shared storage model for that cache using peer-to-peer. This enables high-performance compute in the cloud for EDA tools: with peer-to-peer transfers between multiple readers and writers, everything can now run in parallel.
Most filesystems today are serialized. Parallel filesystems have been used in HPC for a while, but they have been optimized for a different kind of workload, typically highly concurrent I/O, which is not how EDA works.
So we use flash as a cache, not as storage, and that slashes costs: with just 1 or 2 terabytes of NVMe per node, you get extreme file performance.
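The read path described above can be sketched in a few lines. This is a minimal illustrative model, not PeerCache's actual implementation: the class, method names, and dictionary-backed "caches" are assumptions standing in for the local NVMe cache, peer-to-peer fetches, and the shared backing store.

```python
# Hypothetical sketch of a PeerCache-style read path (all names are
# illustrative, not the real PeerCache API). A read checks the local
# NVMe cache first, then peer nodes' caches, and only falls back to
# the shared backing store on a full miss.

class PeerCacheNode:
    def __init__(self, name, peers=None):
        self.name = name
        self.local_cache = {}     # stands in for the local NVMe cache
        self.peers = peers or []  # nodes reachable peer-to-peer

    def read(self, path, backing_store):
        # 1. Local NVMe cache hit: the fastest path.
        if path in self.local_cache:
            return self.local_cache[path]
        # 2. Peer hit: fetch from another node's cache over the network,
        #    avoiding the shared storage entirely.
        for peer in self.peers:
            if path in peer.local_cache:
                data = peer.local_cache[path]
                self.local_cache[path] = data  # populate local cache
                return data
        # 3. Full miss: read from shared backing storage and cache it.
        data = backing_store[path]
        self.local_cache[path] = data
        return data


store = {"/lib/cell.db": b"layout-data"}
node_a = PeerCacheNode("a")
node_b = PeerCacheNode("b", peers=[node_a])

node_a.read("/lib/cell.db", store)  # miss: served from backing store
node_b.read("/lib/cell.db", {})     # peer hit: served from node_a's cache
```

The second read succeeds even with an empty backing store, which is the point of the peer-to-peer model: once any node has warmed its cache, other readers can be served in parallel without touching shared storage.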