Hybrid Cloud – Holodeck (2020-12-10)

Holodeck – Hybrid Cloud

Hybrid Cloud: 6 Implementation Fundamentals

  • Fastest Path to Hybrid
  • Run Existing Workflows – No Retooling
  • Low Cost Data Transfers
  • Reduced Storage: On-Premise & Cloud
  • Fast Configuration & Teardown
  • Scale Out
Fastest Path to Hybrid Cloud

Automatically Run your Existing Workflows in the Cloud

With Holodeck, you can be ready for hybrid cloud fast. Your on-premise workflows run identically in the cloud, without any retooling. Holodeck automatically and dynamically determines the exact workflow data (tools, environment & project data) needed by any job, and then projects that data into the cloud.

Hybrid cloud meets the tremendous demand for high performance, elastic compute across a broad range of applications, such as chip design & verification with Electronic Design Automation (EDA) tools, 3D animation, biomedical research, mechanical CAD, aerospace, video, and complex software products, as well as mixed use cases. This is especially true for I/O intensive jobs.

Holodeck automatically & dynamically determines the exact workflow data needed by a job.

With hybrid cloud, your application workflows continue to run on on-premise compute farms; you can then temporarily burst into a public or private cloud when your demand for computing capacity spikes. Hybrid cloud is also a great solution for running low cost remote sites in the cloud with low latency and high performance.

You can achieve high performance, elastic compute and immediately reduce costs with Holodeck. It enables the absolute minimum two-way data transfer between on-premise and the cloud. It also uses a cache fabric to reduce your on-premise and cloud storage requirements.

Amazon, Google and Azure cloud services work securely and transparently. They can be scheduled in the same way as your local on-premise queues in your compute farm grid, enabling you to run overnight regressions in the cloud or on-premise with no additional effort.

Challenges with Current Hybrid Cloud Deployment Methods

Workflow Replication Can Create Functionality Risk

Cloud and on-premise infrastructures are different. The cloud typically uses block storage, where data is accessed by only one host at a time, while complex tool workflows rely on an NFS filer to ensure that the same coherent data is accessible across thousands of nodes. However, NFS filers are not available as a native instance in the cloud.

Companies typically try to attain hybrid workflows today by setting up the cloud environment, copying the data, and then running jobs. However, the interdependencies and complexity make this time consuming and error prone.

Data Duplication is Slow & Expensive

The typical method of replicating on-premise workflows & data in the cloud has major drawbacks. First, copying your design data into the cloud and then continually keeping it updated causes slowdowns due to latency, so you lose much of the fast compute benefit of hybrid cloud.

It is expensive to transfer data from the cloud back to on-premise, and the costs can add up quickly. Additionally, when you have two ‘copies’ of your environment and data, you end up paying for data storage in multiple domains, as you must continually maintain your on-premise data set plus an additional copy for each cloud projection. And while transferring data into the cloud is typically free, you pay per gigabyte to move it back out, on top of paying for your filer storage.

Building your own “scale-up” NFS solution in the cloud from standard Linux servers also incurs a performance hit, as such servers provide only low- to mid-range performance. True cloud architectures are built to “scale out”.

Advantages of IC Manage Holodeck for Hybrid Cloud



Fastest Path to Hybrid

With IC Manage Holodeck, the process to extend your existing infrastructure into the cloud — or to remote sites — is very quick. There are 3 short steps:

1. You identify your NFS filer mount points and namespaces via a series of filer snapshots, and then pass the information to Holodeck at your on-premise location.

2. Holodeck then replicates your on-premise filer mount points and creates a virtual representation on a remote compute node, using only a bare operating system of your choosing.

3. As your jobs run, Holodeck transfers the file extent data into the cloud (or remote site) database and caches it locally to the peer nodes.

Once the cache requests have been fulfilled, no further data transfers are required between on-premise and cloud — your cloud or remote site runs in a decoupled manner from your on-premise environment.
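As a rough illustration of step 1, the NFS filer mount points can be discovered from a host's mount table. The parser and sample data below are a hypothetical sketch; they do not represent Holodeck's actual input format:

```python
# Minimal sketch of step 1: identifying NFS filer mount points from
# /proc/mounts-style text. Purely illustrative, not Holodeck's format.

def nfs_mounts(mount_table: str) -> list[dict]:
    """Extract NFS mount points from mount-table text."""
    mounts = []
    for line in mount_table.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2].startswith("nfs"):
            mounts.append({
                "server_export": fields[0],  # e.g. filer1:/vol/projects
                "mount_point": fields[1],    # path in the local namespace
                "fstype": fields[2],
            })
    return mounts

sample = """\
filer1:/vol/projects /proj nfs rw,vers=3 0 0
filer2:/vol/tools /tools nfs4 rw 0 0
/dev/sda1 / ext4 rw 0 0
"""

for m in nfs_mounts(sample):
    print(m["server_export"], "->", m["mount_point"])
```

The resulting list of server exports and namespace paths is the kind of information step 1 hands to Holodeck on-premise.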


Your Existing Tools.
Your Workflow.
Your Cloud Vendor.

IC Manage Holodeck preserves your existing workflows — they work precisely the same in the cloud and on-premise, without any retooling.

  • You do not need to build or use special tools and environments that run natively in the cloud.
  • Holodeck can flexibly support mixed vendor environments.

Holodeck automatically and dynamically determines the exact workflow data (tools, environment & design data) needed by a job. It’s able to do this because of the fundamental architecture we’ve pioneered — we separate metadata from data.

IC Manage Holodeck is also cloud vendor agnostic. You get to pick your preferred cloud vendor, and your Amazon/Google/Azure cloud services work transparently.


Low Volume Data Transfers

Holodeck transfers the absolute minimum data in and out of the cloud. Its byte-level extraction delivers the fastest data transfer up to the cloud, with low latency bursting.

Its file delta write-backs then slash expensive cloud data download fees. Holodeck clones the data, generating virtual copies.

  • Engineers can then create fully isolated independent work areas of massive data sets, while only using incremental storage space for asynchronous write-backs to your on-premise storage.
  • Holodeck has ultra-fine data transfer granularity. It saves only the delta changes back to your NFS — it never copies duplicate data.

For example, if you create any number of parallel workspaces, the modified files are tracked as deltas of the original. The data transfer back to on-premise storage is optimized, and the deltas can then be expanded into discrete files as directed by your database. Holodeck even enables selective write-back control to prevent users from inadvertently pushing unnecessary data back across the wire.
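The delta-only write-back idea can be sketched as follows. The fixed block size and `block_deltas` helper are illustrative assumptions, not Holodeck's implementation:

```python
# Illustrative sketch of delta-only write-back: compare a modified file
# against its base version and record only the changed fixed-size
# blocks, so only those blocks travel back to on-premise storage.

BLOCK = 4096  # assumed block size for this sketch

def block_deltas(base: bytes, modified: bytes, block: int = BLOCK) -> dict:
    """Return {block_index: new_block_bytes} for blocks that differ."""
    deltas = {}
    n = max(len(base), len(modified))
    for i in range(0, n, block):
        old, new = base[i:i + block], modified[i:i + block]
        if old != new:
            deltas[i // block] = new
    return deltas

base = b"A" * (3 * BLOCK)
modified = base[:BLOCK] + b"B" * BLOCK + base[2 * BLOCK:]
deltas = block_deltas(base, modified)
# Only the middle block changed, so only one block of data needs to
# be written back; the other two blocks are never retransmitted.
print(len(deltas), "changed block(s)")
```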


Reduces Storage Costs: On-Premise AND Cloud

Holodeck dramatically reduces storage requirements, both for cloud storage and your on-premise filer.

Cloud storage reduction: The shared peer caches eliminate duplicate storage, because the copies now live in the cache fabric rather than in your cloud back-end storage or your on-premise storage. This eliminates the cost of paying for data copies in the cloud, including those that are never used, a cost that would otherwise continue to grow over time.

This same feature also allows you to decouple your on-premise compute nodes from your existing NFS filer storage bottlenecks.

NFS Filer disk storage reduction: Holodeck dramatically reduces your expensive NFS filer storage disk space, due to its ability to separate common file content from the descriptive file metadata containing file names, sizes, owners, groups, masks, along with times of creation, access and modification. You eliminate unnecessary duplication of physical copies that contain both the common data and the meta data in every copy.

The workspace versions share the same set of common files. Only the metadata changes and per-workspace changes require additional storage, and once those changes are checked into the filer, the space is freed up on Holodeck’s P2P caching network.
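The separation of common file content from descriptive metadata can be illustrated with a toy content-addressable store. The `add_file` helper and record fields here are hypothetical, chosen only to show why many workspace copies need just one stored blob:

```python
# Conceptual sketch of metadata/data separation: file content is stored
# once, keyed by its hash, while each workspace keeps only a lightweight
# metadata record (name, size, content hash). Illustrative only.
import hashlib

content_store: dict[str, bytes] = {}   # hash -> common file content
metadata: list[dict] = []              # per-workspace file records

def add_file(workspace: str, name: str, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    content_store.setdefault(digest, data)   # content stored at most once
    metadata.append({"workspace": workspace, "name": name,
                     "size": len(data), "sha256": digest})

blob = b"x" * 1_000_000  # a large common file, e.g. a tool binary
for ws in ("ws1", "ws2", "ws3"):
    add_file(ws, "tool.bin", blob)

# Three workspace records, but the megabyte of content is stored once.
print(len(metadata), "records,", len(content_store), "stored blob(s)")
```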



Fast Configuration.
Fast Teardown.

Holodeck avoids lengthy post-OS configuration; it brings up a “naked” cluster when you run a job from scratch, needing only the right OS and security configuration. You avoid the full, lengthy software provisioning procedure, since the binaries and data for the job are treated identically and delivered at file extent granularity.

Holodeck also enables fast teardown of your environment. It only requires one database on a single node with locally connected block storage to serve as a warm cache for subsequent bursting operations.


Deploying Holodeck: Scale Out Quickly Or Gradually

Start with Only What You Need Now

Holodeck delivers high-performance computing I/O by utilizing a peer-to-peer cache fabric that is scaled out simply by adding additional peers.

It’s important for many development teams to begin with initial hybrid cloud projects, and then increase their cloud workflows over time.

  • Holodeck lets you start with small hybrid cloud pilot projects.
  • Its architecture enables companies to scale out as quickly or as gradually as they want.
  • It consists of one or more databases that can be provisioned on block storage; the peer nodes use only small amounts of temporary storage on high performance NVMe devices, such as the widely available AWS i3 instance types.

Holodeck’s peer-to-peer sharing model allows any node to read or write data generated on any other node. Because this sharing model has the same semantics as NFS, it ensures 100% compatibility with all your existing workflows.
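The peer-to-peer sharing model can be sketched as a toy read path: check the local cache, then the other peers, and fall back to the origin filer only on a fabric-wide miss. The `Peer` and `Origin` classes below are illustrative assumptions, not Holodeck's actual protocol:

```python
# Simplified model of a peer-to-peer cache fabric: a read checks the
# local cache first, then other peers, and only hits the origin filer
# on a fabric-wide miss. Illustrative sketch only.

class Origin(dict):
    """Stand-in for the on-premise filer; counts how often it is read."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.reads = 0

    def __getitem__(self, key):
        self.reads += 1
        return super().__getitem__(key)

class Peer:
    def __init__(self, name, fabric, origin):
        self.name, self.cache = name, {}
        self.fabric, self.origin = fabric, origin
        fabric.append(self)

    def read(self, path):
        if path in self.cache:                    # local cache hit
            return self.cache[path]
        for peer in self.fabric:                  # ask the other peers
            if peer is not self and path in peer.cache:
                self.cache[path] = peer.cache[path]
                return self.cache[path]
        self.cache[path] = self.origin[path]      # origin filer fallback
        return self.cache[path]

origin = Origin({"/proj/design.v": b"module top; endmodule"})
fabric = []
a = Peer("a", fabric, origin)
b = Peer("b", fabric, origin)
a.read("/proj/design.v")   # fabric-wide miss: fetched from the origin
b.read("/proj/design.v")   # served from peer a's cache, not the origin
print("origin reads:", origin.reads)
```

Adding a peer to `fabric` is all it takes to scale out the cache in this model, which mirrors how additional Holodeck peers extend the fabric.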

Then Scale Out with High Performance

Following your initial hybrid cloud project, your design and verification teams can incrementally move select workloads into the cloud, and then horizontally scale out as they evolve.

You simply run additional Holodeck peer nodes as part of your cloud auto-scale groups, ensuring consistent low latency and high bandwidth parallel performance at all times and eliminating traditional storage bottlenecks. You can expand to 1000s of nodes as your needs evolve.

Holodeck behaves like your NFS filer, with the same high resiliency, but with full horizontal scaling. It runs Raft distributed consensus between databases, so that coherency is maintained even if one unit fails.

The scale out architecture allows additional peers, in the cloud or at your remote site, to be added to your infrastructure, giving you a fully elastic cache fabric to go with your elastic compute. Because it is file based, Holodeck works with all applications in any domain.