David Driggers, Cirrascale Chief Technology Officer - Interview Series

David Driggers is the Chief Technology Officer of Cirrascale Cloud Services, a leading provider of deep learning infrastructure solutions. Guided by integrity, agility, and customer focus, Cirrascale delivers innovative cloud-based Infrastructure-as-a-Service (IaaS) solutions. Cirrascale partners with leaders across the AI ecosystem, such as Red Hat and WekaIO, to ensure seamless access to advanced tools, enabling customers to accelerate their deep learning progress while maintaining predictable costs.

Cirrascale is the only GPUaaS provider partnering with major semiconductor companies such as NVIDIA, AMD, Cerebras, and Qualcomm. How does this unique positioning benefit your customers in terms of performance and scalability?

As the industry moves from training models to deploying those models, which is called inference, there is no one-size-fits-all solution. Depending on a model's size and latency requirements, different accelerators offer different value propositions. Responsiveness, cost-per-token advantage, and performance-per-watt all affect cost and user experience. Because inference runs in production, these characteristics are critical.

What makes Cirrascale’s AI Innovation Cloud different from other GPUaaS providers in supporting AI and deep learning workflows?

Cirrascale’s AI Innovation Cloud lets users try out new technologies, not available in any other cloud, in a secure, assisted, and fully supported manner. This helps not only with cloud technology decisions but also with potential on-premises purchases.

How does the Cirrascale platform ensure seamless integration for both startups and enterprises with different AI acceleration needs?

Cirrascale takes a solutions-based approach to our cloud. This means that for startups and enterprises alike, we provide a turnkey solution that covers both developers and infrastructure operations. Because our offering is not shared or virtualized, what is known as bare metal, Cirrascale fully configures every aspect of the service, including the complete server configuration, networking, storage, security, and user access requirements, and then hands the service over to the client. Our customers can start using the service immediately without having to configure anything themselves.

Enterprise AI adoption faces obstacles such as data quality, infrastructure limitations, and high costs. How does Cirrascale address these challenges for companies scaling their AI initiatives?

While Cirrascale does not provide data quality services, we do partner with companies that can help with data issues. On the infrastructure and cost side, Cirrascale can tailor solutions to a customer's specific needs, which delivers better overall performance and more appropriate costs for those requirements.

With Google's advances in Willow and AI models like Gemini 2.0, how do you see the enterprise AI landscape evolving in the near future?

For most, quantum computing is still some time away, due to the lack of programmers and off-the-shelf programs that can take advantage of it. Gemini 2.0 and other large-scale offerings such as GPT-4 and Claude will certainly see some uptake from enterprise customers, but the enterprise market is not yet ready to share its data with third parties, especially ones that might use that data to train their models.

Finding the right balance of power, price, and performance is essential for scaling AI solutions. What is your best advice for companies seeking this balance?

Test, test, test. It is important for companies to test their models on different platforms. Development is different from production, and cost becomes an issue in production. Training may be a one-time expense, but inference is forever. If you can meet your performance requirements at a lower cost, those savings drop straight to the bottom line and may even make the solution viable in the first place; deploying large models is often too expensive to be practical. End users should also look for a company that can assist with this testing, since ML engineers can often help deploy the models that data scientists create.

How is Cirrascale adapting its solutions to meet the growing demand for generative AI applications, such as LLMs and image generation models?

Cirrascale offers the broadest range of AI accelerators, and given the varying sizes and scope of LLMs and generative AI models (including multimodal scenarios), as well as batch versus real-time requirements, it truly is a case of horses for courses.

Can you share examples of how Cirrascale has helped companies overcome latency and data transfer bottlenecks in their AI workflows?

Cirrascale operates data centers in multiple regions and does not treat network connectivity as a profit center. This allows our users to "right-size" the connectivity required for moving data and to use multiple locations if latency is a key consideration. Likewise, by analyzing the actual workload, Cirrascale can help balance latency, performance, and cost to deliver the best value once performance requirements are met.

Which emerging trends in AI hardware or infrastructure are you most excited about, and how is Cirrascale preparing for them?

We are very excited about new purpose-built AI processors as alternatives to general-purpose GPUs. GPUs happened to be well suited for training, but they are not optimized for inference, which inherently involves different use cases than training.

Thank you for the great interview; readers who wish to learn more should visit Cirrascale Cloud Services.
