More efficient data center computing

UTA computer science team uses NSF grant to improve computing speed at data centers

A University of Texas at Arlington computer science team is using a $600,000 National Science Foundation grant to develop algorithms for microservice-based data center services that allow for quicker, more efficient use of data center computing resources.

In addition, the grant will boost curriculum while supporting several doctoral students working on the project and providing research experiences for undergraduate and underrepresented students.

Hong Jiang, chair and Wendell H. Nedderman Endowed Professor of the UTA Computer Science and Engineering Department, is leading the project, titled “SHF: Small: A Distributed Scalable End-to-End Tail Latency SLO Guaranteed Resource Management Framework for Microservices.” Microservices are a style of software construction that Jiang compared to building something with Legos.

“Each individual microservice can be viewed as a Lego part, and one can enable a specific computing service by simply putting together a set of Lego parts, or microservices, rather than having to build the service from scratch,” Jiang said. “When you go to scale a computing service built with microservices, you can just go to the microservices that pose bottlenecks and scale them, without scaling other parts of the service. However, as different computing services may share microservices, providing a performance guarantee for individual computing services becomes a critical challenge.”

An additional challenge Jiang is trying to address is how to utilize the microservice resources efficiently.

“The ability to scale the services at the microservice level gives you the best performance and response time, but you often are not utilizing the entire power of the system,” Jiang said. “It’s like having a car that has 600 horsepower but only utilizing 300 horsepower from the engine. It’s costly and not needed.”

Zhijun Wang, a computer science research associate, and Hao Che, a computer science professor, are members of the research team. Wang said that the solution the team has proposed should address those challenges.

“The microservices already are there,” Wang said. “What we’re trying to do is provide performance guarantees for individual computing services while maximizing resource utilization.”

Che said such maximization is a tremendous economic benefit to businesses.

“For every additional 100 milliseconds that a customer is waiting on a business’ website, the chance that the customer could leave increases by a few percent,” Che said. “It’s imperative that the questions of customers browsing your website are answered quickly and accurately. It means real monetary loss if they take their business somewhere else.”
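Che's rule of thumb can be turned into a back-of-envelope calculation. The 3%-per-100-ms abandonment figure, visitor count, and revenue per visitor below are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope model of Che's rule of thumb: each extra 100 ms of wait
# raises the chance a customer leaves by a few percent. All figures here
# (3% per 100 ms, traffic, revenue) are hypothetical.

def expected_loss(extra_latency_ms, abandon_per_100ms=0.03,
                  visitors=10_000, revenue_per_visitor=5.0):
    """Estimated revenue lost to latency-driven abandonment."""
    abandon_rate = min(1.0, (extra_latency_ms / 100) * abandon_per_100ms)
    return visitors * abandon_rate * revenue_per_visitor

# 300 ms of extra delay -> 9% abandonment -> $4,500 lost on $50,000 of traffic
loss = expected_loss(300)
```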

A few takeaways:

  • Like many cloud companies, Microsoft is aiming to build edge and cloud software and services as if they were one continuous computing fabric.
  • Microsoft is looking to support the whole commercial IoT gamut, from "tiny edge" (meaning microcontrollers/sensors/fixed purpose devices); to "light edge" Windows IoT Enterprise, Windows Server IoT and industrial equipment, robots and kiosks; to "heavy edge," meaning hybrid servers, hyper-converged infrastructure (Azure HCI) and Azure Stack.
  • More and more IoT solutions are starting to look like small datacenters, and the boundaries between devices, servers and virtual machines are blurring.
  • In addition to better Azure integration, Microsoft is looking to bring cross-service and device security to its IoT offerings (which I'm assuming means Azure Active Directory integration, among other things).
  • Cloud-native programming models and Kubernetes/container orchestration are key to its IoT and edge strategies.

Microsoft also heavily played up the idea of a "hybrid loop" at Build this year. The concept: Hybrid apps will be able to allocate resources dynamically, both locally on PCs and in the cloud. The cloud becomes an additional computing resource for these kinds of applications, and applications -- especially AI/ML-enabled ones -- can opt to do processing locally on an edge device or in the cloud (or both). This concept definitely relies on IoT and edge devices and services becoming more deeply integrated with Azure.
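The "hybrid loop" idea boils down to a per-task placement decision made at run time. The sketch below is a guess at what such a dispatch policy might look like; the function, thresholds, and cost model are hypothetical and are not Microsoft's API.

```python
# Hypothetical sketch of a "hybrid loop" placement decision: an app picks
# local (edge) or cloud execution per task at run time. The policy and
# thresholds are illustrative assumptions, not Microsoft's implementation.

def choose_target(model_size_mb: float, local_free_mem_mb: float,
                  network_up: bool) -> str:
    """Decide where to run an ML inference task."""
    if not network_up:
        return "local"   # cloud unreachable: degrade gracefully on-device
    if model_size_mb <= local_free_mem_mb:
        return "local"   # model fits on the edge device: skip the round trip
    return "cloud"       # too big for the device: burst to the cloud

# A small vision model runs on-device; a large language model bursts out.
```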

I'm thinking we'll hear more about Microsoft's updated IoT and edge-computing vision at its upcoming Ignite 2022 IT pro conference in mid-October, if not before.

 
