The rapid arrival of real-time gaming, virtual reality, and metaverse applications is changing the way network, compute, memory, and interconnect I/O interact. Arista Extensible Operating System (EOS) provides the tools needed to build a lossless, high-bandwidth, low-latency AI network, using IP/Ethernet switches as the GPU interconnect for AI/ML workloads. The exponential growth in AI applications calls for a standardized transport such as Ethernet to build a power-efficient interconnect and overcome the administrative and scale-out complexities of traditional approaches. Building an IP/Ethernet architecture with high-performance Arista switches maximizes application performance while streamlining network operations. Modern AI applications need a high-bandwidth, lossless, low-latency, scalable, multi-tenant network that can interconnect hundreds or thousands of GPUs at speeds of 100Gbps, 400Gbps, 800Gbps, and beyond. With support for Data Center Quantized Congestion Notification (DCQCN), priority-based Quality of Service (QoS), and adjustable buffer allocation schemes, Arista switches provide the mechanisms needed to deliver such a network.
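To make the DCQCN mechanism mentioned above concrete, the sketch below models the sender-side (reaction point) rate control described in the original DCQCN algorithm: on receiving a Congestion Notification Packet (CNP) the sender cuts its rate multiplicatively, and in the absence of CNPs it recovers toward a target rate. This is a simplified illustration, not vendor code; the class name and parameter values are assumptions chosen for readability, and details such as the separate alpha-decay timer and byte-counter stages are omitted.

```python
class DcqcnSender:
    """Simplified DCQCN reaction-point (sender) rate control.

    Variable names (rc, rt, alpha, g, r_ai) follow the DCQCN
    algorithm's conventions; the numeric defaults here are
    illustrative, not switch or NIC defaults.
    """

    def __init__(self, line_rate_gbps=100.0, g=1 / 256, r_ai_gbps=0.5):
        self.rc = line_rate_gbps   # current sending rate
        self.rt = line_rate_gbps   # target rate to recover toward
        self.alpha = 1.0           # estimate of congestion severity
        self.g = g                 # gain for the alpha update
        self.r_ai = r_ai_gbps      # additive-increase step for the target

    def on_cnp(self):
        """React to a CNP: the receiver saw ECN-marked packets."""
        self.rt = self.rc                       # remember rate before the cut
        self.rc *= (1 - self.alpha / 2)         # multiplicative decrease
        self.alpha = (1 - self.g) * self.alpha + self.g  # raise congestion estimate

    def on_rate_increase_timer(self, fast_recovery=True):
        """Periodic increase while no CNPs arrive."""
        if not fast_recovery:
            self.rt += self.r_ai                # additive increase of the target
        self.rc = (self.rt + self.rc) / 2       # move halfway toward the target


sender = DcqcnSender(line_rate_gbps=100.0)
sender.on_cnp()                     # congestion: rate drops from 100 to 50 Gbps
for _ in range(5):
    sender.on_rate_increase_timer() # no further CNPs: rate recovers toward 100
```

The halving-then-recovery shape is what lets DCQCN keep switch queues short (avoiding PFC pauses) while converging back to line rate quickly once congestion clears.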