AWS Announces Nine New Compute and Networking Innovations for Amazon EC2

December 4, 2019, 04:34 | Tencent Zixuangu (syndicated)



New Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, delivering up to 40% improved price/performance over comparable x86-based instances

New Amazon EC2 machine learning instances, featuring AWS-designed Inferentia chips, offer high performance and the lowest cost machine learning inference in the cloud

AWS Compute Optimizer machine learning-powered instance recommendation engine makes it easy to choose the right compute resources

AWS Transit Gateway now supports the native IP multicast protocol, integrates with SD-WAN partners to enable easier and faster connectivity between customers’ branch offices and AWS, and provides a new capability that makes it much easier for customers to manage and monitor their global network from a single pane of glass

VPC Ingress Routing allows customers to easily deploy third-party appliances in their VPC for specialized networking and security functions

SEATTLE--(BUSINESS WIRE)-- Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced nine new Amazon Elastic Compute Cloud (EC2) innovations. AWS already has more compute and networking capabilities than any other cloud provider, including the most powerful GPU instances, the fastest processors, and the only cloud with 100 Gbps connectivity for standard instances. Today, AWS added to its industry-leading compute and networking innovations with new Arm-based instances (M6g, C6g, R6g) powered by AWS-designed Graviton2 processors, machine learning inference instances (Inf1) powered by AWS-designed Inferentia chips, a new Amazon EC2 feature that uses machine learning to optimize Amazon EC2 usage for cost and performance, and networking enhancements that make it easier for customers to scale, secure, and manage their workloads in AWS.

New Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed AWS Graviton2 processors, deliver up to 40% improved price/performance over comparable x86-based instances

Since their introduction a year ago, Arm-based Amazon EC2 A1 instances (powered by AWS's first-generation Graviton chips) have provided customers with significant cost savings for scale-out workloads like containerized microservices and web tier applications. Encouraged by these cost savings, and by growing support for Arm from a broader ecosystem of operating system vendors (OSVs) and independent software vendors (ISVs), customers now want to run more demanding workloads with varying characteristics on AWS Graviton-based instances, including compute-heavy data analytics and memory-intensive data stores. These diverse workloads require capabilities beyond those supported by A1 instances, such as faster processing, higher memory capacity, increased networking bandwidth, and larger instance sizes.

The new Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, deliver up to 40% better price/performance than current x86 processor-based M5, R5, and C5 instances for a broad spectrum of workloads, including high performance computing, machine learning, application servers, video encoding, microservices, open source databases, and in-memory caches. These new Arm-based instances are powered by the AWS Nitro System, a collection of custom AWS hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage, to reduce customer spend and effort when using AWS. AWS Graviton2 processors introduce several new performance optimizations versus the first generation. AWS Graviton2 processors use 64-bit Arm Neoverse cores and custom silicon designed by AWS, built using advanced 7 nanometer manufacturing technology. Optimized for cloud native applications, AWS Graviton2 processors provide 2x faster floating-point performance per core for scientific and high performance computing workloads, optimized instructions for faster machine learning inference, and custom hardware acceleration for compression workloads. AWS Graviton2 processors also offer always-on, fully encrypted DDR4 memory and provide 50% faster per-core encryption performance to further enhance security. AWS Graviton2-powered instances provide up to 64 vCPUs, 25 Gbps of enhanced networking, and 18 Gbps of EBS bandwidth. Customers can also choose NVMe SSD local instance storage variants (C6gd, M6gd, and R6gd) or bare metal options for all of the new instance types.
The new instance types are supported by several open source software distributions (Amazon Linux 2, Ubuntu, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, FreeBSD, as well as the Amazon Corretto distribution of OpenJDK), container services (Docker Desktop, Amazon ECS, Amazon EKS), agents (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector), and developer tools (AWS Code Suite, Jenkins). Already, AWS services like Amazon Elastic Load Balancing, Amazon ElastiCache, and Amazon Elastic MapReduce have tested the AWS Graviton2 instances, found that they deliver superior price/performance, and plan to move them into production in 2020. M6g instances are available today in preview. C6g, C6gd, M6gd, R6g, and R6gd instances will be available in the coming months. To learn more about AWS Graviton2-powered instances, visit: https://aws.amazon.com/ec2/graviton.
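A claim like "up to 40% better price/performance" folds two measurements into a single ratio: benchmark score divided by hourly price. A minimal sketch of that arithmetic, using made-up benchmark scores and hourly prices (the real figures vary by instance size, region, and workload):

```python
# Illustrative only: how a price/performance comparison is computed.
# The scores and prices below are hypothetical, not published AWS figures.

def price_performance(score: float, hourly_price: float) -> float:
    """Benchmark performance delivered per dollar per hour (higher is better)."""
    return score / hourly_price

def improvement(new: float, old: float) -> float:
    """Relative price/performance gain of `new` over `old`."""
    return (new - old) / old

# Hypothetical inputs: a slightly faster Graviton2 instance at a lower hourly price.
m5 = price_performance(score=100.0, hourly_price=0.192)
m6g = price_performance(score=112.0, hourly_price=0.154)

gain = improvement(m6g, m5)
print(f"price/performance gain: {gain:.0%}")
```

Note that a gain of this kind can come from either side of the ratio: a faster chip at the same price, a cheaper instance at the same speed, or, as sketched here, a bit of both.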

Amazon EC2 Inf1 instances powered by AWS Inferentia chips deliver high performance and the lowest cost machine learning inference in the cloud

Customers across a diverse set of industries are turning to machine learning to address common use cases (e.g. personalized shopping recommendations, fraud detection in financial transactions, increasing customer engagement with chatbots, etc.). Many of these customers are evolving their use of machine learning from running experiments to scaling up production machine learning workloads where performance and efficiency really matter. Customers want high performance for their machine learning applications in order to deliver the best possible end user experience. While training rightfully receives a lot of attention, inference actually accounts for the majority of complexity and the cost (for every dollar spent on training, up to nine are spent on inference) of running machine learning in production, which can limit much broader usage and stall customer innovation. Additionally, several real-time machine learning applications are sensitive to how quickly an inference is executed (latency), while other batch workloads need to be optimized for how many inferences can be processed per second (throughput), requiring customers to choose between processors optimized for latency or throughput.

With Amazon EC2 Inf1 instances, customers receive high performance and the lowest cost for machine learning inference in the cloud, and no longer need to make a sub-optimal tradeoff between optimizing for latency or throughput when running large machine learning models in production. Amazon EC2 Inf1 instances feature AWS Inferentia, a high performance machine learning inference chip designed by AWS. AWS Inferentia delivers very high throughput, low latency, and sustained performance for extremely cost-effective real-time and batch inference applications. AWS Inferentia provides 128 tera operations per second (TOPS, or trillions of operations per second) per chip and up to two thousand TOPS per Amazon EC2 Inf1 instance, for multiple frameworks (including TensorFlow, PyTorch, and Apache MXNet) and multiple data types (including INT-8 and mixed precision FP-16 and bfloat16). Amazon EC2 Inf1 instances are built on the AWS Nitro System, a collection of custom AWS hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage, to reduce customer spend and effort when using AWS. Amazon EC2 Inf1 instances deliver low inference latency, up to 3x higher inference throughput, and up to 40% lower cost-per-inference than the Amazon EC2 G4 instance family, which was already the lowest cost instance family for machine learning inference available in the cloud. Using Amazon EC2 Inf1 instances, customers can run large scale machine learning inference to perform tasks like image recognition, speech recognition, natural language processing, personalization, and fraud detection at the lowest cost in the cloud. Amazon EC2 Inf1 instances can be deployed using AWS Deep Learning AMIs and will be available via managed services such as Amazon SageMaker, Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS).
To get started with Amazon EC2 Inf1 instances visit: https://aws.amazon.com/ec2/instance-types/inf1.
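Cost-per-inference, the metric behind the "up to 40% lower" comparison above, is simple to compute once you know an instance's hourly price and sustained throughput. A sketch of that calculation, with entirely hypothetical prices and throughput figures (only the comparison structure mirrors the announcement, not the numbers):

```python
# Cost-per-inference arithmetic, as used when comparing instance families.
# All prices and throughput numbers below are invented for illustration.

def cost_per_million_inferences(hourly_price: float, inferences_per_sec: float) -> float:
    """USD spent per one million inferences at a sustained rate."""
    inferences_per_hour = inferences_per_sec * 3600
    return hourly_price / inferences_per_hour * 1_000_000

# Hypothetical G4-class vs. Inf1-class instance
g4 = cost_per_million_inferences(hourly_price=0.526, inferences_per_sec=1_000)
inf1 = cost_per_million_inferences(hourly_price=0.379, inferences_per_sec=1_200)

saving = (g4 - inf1) / g4
print(f"G4:   ${g4:.3f} per 1M inferences")
print(f"Inf1: ${inf1:.3f} per 1M inferences")
print(f"cost-per-inference saving: {saving:.0%}")
```

The same structure explains the latency/throughput tension described earlier: batch workloads maximize `inferences_per_sec`, while real-time workloads may run below peak throughput to keep per-request latency low, which raises their effective cost per inference.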

AWS Compute Optimizer uses a machine learning-powered instance recommendation engine to make it easy to choose the right compute resources

Choosing the right compute resources for a workload is an important task. Over-provisioning resources can lead to unnecessary cost, and under-provisioning can lead to poor performance. Until now, to optimize their use of Amazon EC2 resources, customers have allocated systems engineers to analyze resource utilization and performance data, or invested in resources to run application simulations across a variety of workloads. This effort can add up over time, because the resource selection process must be repeated as applications and usage patterns change, new applications move into production, and new hardware platforms become available. As a result, customers sometimes leave their resources inefficiently sized, pay for expensive third-party solutions, or build optimization solutions themselves to manage their Amazon EC2 usage.

AWS Compute Optimizer delivers intuitive and easily actionable AWS resource recommendations so customers can identify optimal Amazon EC2 instance types, including those that are a part of Auto Scaling groups, for their workloads, without requiring specialized knowledge or investing substantial time and money. AWS Compute Optimizer analyzes the configuration and resource utilization of a workload to identify dozens of defining characteristics (e.g. whether a workload is CPU-intensive, or if it exhibits a daily pattern). AWS Compute Optimizer uses machine learning algorithms that AWS has built to analyze these characteristics and identify the hardware resource headroom required by the workload. Then, AWS Compute Optimizer infers how the workload would perform on various Amazon EC2 instances, and makes recommendations for the optimal AWS compute resources for that specific workload. Customers can activate AWS Compute Optimizer with a few clicks in the AWS Management Console. Once activated, AWS Compute Optimizer immediately starts analyzing running AWS resources, observing their configurations and Amazon CloudWatch metrics history, and generating recommendations based upon their characteristics. To get started with AWS Compute Optimizer, visit http://aws.amazon.com/compute-optimizer.
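The core right-sizing idea, deriving the headroom a workload needs from its utilization history and picking the smallest resource that still fits, can be sketched in a few lines. This is a toy illustration, not Compute Optimizer's actual algorithm: the instance sizes, the 99th-percentile choice, and the headroom factor are all assumptions, and the real service applies machine learning over many more workload characteristics.

```python
# Toy right-sizing recommender: pick the smallest instance size whose
# capacity covers the workload's observed peak demand plus headroom.
# Sizes, percentile, and headroom factor are invented for illustration.

from statistics import quantiles

# vCPUs per size in one hypothetical instance family
SIZES = {"large": 2, "xlarge": 4, "2xlarge": 8, "4xlarge": 16}

def recommend(current_size: str, cpu_util_pct: list, headroom: float = 1.2) -> str:
    """Recommend a size from CPU utilization history (percent of current capacity)."""
    current_vcpus = SIZES[current_size]
    # 99th percentile of observed utilization, converted to vCPUs actually used
    p99 = quantiles(cpu_util_pct, n=100)[98]
    demand_vcpus = current_vcpus * (p99 / 100) * headroom
    for size, vcpus in sorted(SIZES.items(), key=lambda kv: kv[1]):
        if vcpus >= demand_vcpus:
            return size
    return max(SIZES, key=SIZES.get)  # nothing fits: largest available

# A 4xlarge that rarely exceeds 20% CPU is a downsizing candidate.
history = [12.0] * 95 + [20.0] * 5
print(recommend("4xlarge", history))
```

In the real service the utilization history would come from Amazon CloudWatch metrics, as the paragraph above describes, rather than from a hand-built list.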

New AWS Transit Gateway capabilities add native support for the IP multicast protocol, and make it easier to create and monitor a scalable, global network

AWS Transit Gateway is a network hub that enables customers to easily scale and manage connectivity between Amazon Virtual Private Clouds (VPCs) and customers' on-premises datacenters, as well as between thousands of VPCs within an AWS region. Today, AWS added five new networking capabilities for AWS Transit Gateway that simplify the management of global private networks and enable multicast workloads in the cloud:

Transit Gateway Multicast is the first native multicast support in the public cloud. Customers use multicast routing when they need to quickly distribute copies of the same data from a single source to multiple subscribers (e.g. digital media or market data from a stock exchange). Multicast reduces bandwidth across the network and ensures that end subscribers get the information quickly, at roughly the same time. Up until now, there has been no easy way for customers to host multicast applications in the cloud. Most multicast solutions are hardware-based, so customers were forced to buy and maintain inflexible and difficult-to-scale hardware on premises. In addition, they had to maintain this hardware in a separate on-premises network, which added complexity and cost. Some have tried to implement a multicast solution in the cloud without native support, but those workarounds were often hard to support and scale. With Transit Gateway Multicast, customers can now build multicast applications in the cloud that scale up and down based on demand, without having to buy and maintain custom hardware to support their peak loads.

Transit Gateway Inter-Region Peering makes it easy to build global networks by connecting Transit Gateways across multiple AWS regions. Before Inter-Region Peering, the only way resources in a VPC in one region could communicate with resources in a VPC in a different region was through peer-to-peer connections between those VPCs, which are difficult to scale and manage in large numbers. Inter-Region Peering addresses this and makes it easy to create secure and private global networks by peering AWS Transit Gateways between different AWS regions. All traffic between regions that uses Inter-Region Peering is carried by the AWS backbone, so it never traverses the public Internet, and it is also anonymized and encrypted. This provides the highest level of security while ensuring that a customer’s traffic always takes the optimal path between regions, delivering higher and more consistent performance than the public Internet.

Accelerated Site to Site VPN is a new capability that provides a more secure, predictable, and performant VPN experience for customers who need secure access from a branch location to services in AWS. Many customers have branch offices in smaller cities or international locations that are physically far away from the applications running in AWS. They use a VPN connection over the Internet to connect their branch offices to services in AWS, but that connection typically traverses the public Internet, hopping through multiple public networks, which can lead to lower reliability, unpredictable performance, and more exposure to attempted security attacks. With Accelerated Site to Site VPN, customers can connect their branch locations to an AWS Transit Gateway through the closest AWS edge location, reducing the number of network hops involved and optimizing network performance for lower latency and higher consistency.

Integration with popular Software Defined Wide Area Network (SD-WAN) vendors is a new capability that lets customers easily integrate third-party SD-WAN solutions from Cisco, Aruba, Silver Peak, and Aviatrix with AWS. Today, many customers use SD-WAN solutions to manage the local network in their branch locations, and also to connect those locations to their datacenters or cloud networks in AWS. To set up these branch devices and network connections, customers had to manually provision and configure their branch and AWS resources, which can take significant time and effort. While Accelerated Site to Site VPN provides a secure, predictable, and performant connection from branch locations, this SD-WAN integration goes further by allowing customers to automatically provision, configure, and connect to AWS using popular third-party SD-WAN solutions. All this makes it easier for customers to set up and manage connectivity to their branch offices.

Transit Gateway Network Manager is a new capability that simplifies the monitoring of global networking resources, both in the cloud and on premises, by pulling everything together in a single pane of glass. While customers are shifting more and more applications to run on their cloud network, they still maintain an on-premises network, and because of this they need to monitor both environments so they can quickly respond to issues in either one. Today, to manage and operate these networks, customers need to use multiple tools and consoles across AWS and other on-premises providers. This complexity can lead to management errors, or delay the time it takes to identify and solve a problem. Network Manager simplifies the creation and operation of global private networks by providing a unified dashboard to visualize and monitor the health of a customer’s VPCs, Transit Gateways, Direct Connect connections, and VPN connections to branch locations and on-premises networks.
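The bandwidth argument for multicast in fan-out scenarios like market data distribution is worth making concrete: with unicast, the source must transmit one copy per subscriber, while with multicast the network replicates the packet. A small sketch with illustrative numbers only:

```python
# Source egress bandwidth for fan-out distribution: unicast re-sends every
# message to each subscriber; multicast sends one copy and lets the network
# replicate it. Message size, rate, and subscriber count are illustrative.

def source_bandwidth_bps(msg_size_bytes: int, msgs_per_sec: int,
                         subscribers: int, multicast: bool) -> int:
    copies = 1 if multicast else subscribers
    return msg_size_bytes * 8 * msgs_per_sec * copies

# 1 KB market-data updates at 10,000 msgs/s to 500 subscribers
unicast = source_bandwidth_bps(1024, 10_000, 500, multicast=False)
mcast = source_bandwidth_bps(1024, 10_000, 500, multicast=True)

print(f"unicast egress:   {unicast / 1e9:.1f} Gbps")
print(f"multicast egress: {mcast / 1e6:.1f} Mbps")
```

The source's egress requirement shrinks by a factor equal to the subscriber count, which is why the hardware-based multicast solutions mentioned above were hard to replace with naive unicast workarounds in the cloud.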

Amazon VPC Ingress Routing helps customers easily integrate third-party network and security appliances into their network

When enterprises migrate to AWS, they often want to bring the network or security appliance that they have used for years in their on-premises data centers to the cloud as a virtual appliance. While the AWS Marketplace today offers the broadest selection of networking and security virtual appliances in a cloud marketplace, customers lacked the flexibility to easily route traffic entering an Amazon VPC through these appliances. With Amazon VPC Ingress Routing, customers can now associate route tables with the Internet Gateway (IGW) and Virtual Private Gateway (VGW) and define routes to redirect incoming and outgoing VPC traffic to third-party appliances. This makes it easier for customers to integrate virtual appliances that meet their networking and security needs in the routing path of inbound and outbound traffic. This feature can be provisioned with a few clicks or API calls, making it easy for customers to deploy networking and security appliances in their network without creating complex workarounds that don’t scale. Partners with virtual appliances that currently support Amazon VPC Ingress Routing include 128 Technology, Aviatrix, Barracuda, Check Point, Cisco, Citrix Systems, FireEye, Fortinet, Forcepoint, HashiCorp, IBM Security, Lastline, NETSCOUT Systems, Palo Alto Networks, ShieldX Networks, Sophos, Trend Micro, Valtix, Vectra, and Versa Networks.
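The mechanism described above is ordinary longest-prefix-match routing, with the new twist that the matching entry on an edge route table can target an appliance's network interface instead of the destination subnet directly. A self-contained sketch of that lookup logic (the route targets such as the `eni-appliance-1234` placeholder are hypothetical, and real VPC route tables are configured through the AWS console or API, not in code like this):

```python
# Sketch of the routing idea behind VPC Ingress Routing: an ingress route
# table on the Internet Gateway whose most specific matching entry can send
# incoming traffic through a security appliance's network interface.

import ipaddress

# (destination CIDR, target) -- targets here are illustrative placeholders
ROUTES = [
    ("10.0.0.0/16", "local"),               # the VPC itself
    ("10.0.1.0/24", "eni-appliance-1234"),  # web subnet: inspect first
]

def lookup(dst_ip: str) -> str:
    """Longest-prefix match over the route table."""
    ip = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTES
               if ip in ipaddress.ip_network(cidr)]
    if not matches:
        return "drop"
    # the most specific route (longest prefix) wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.1.25"))  # web-subnet traffic is steered via the appliance
print(lookup("10.0.9.9"))   # other VPC traffic routes normally
```

Because the /24 entry is more specific than the /16, traffic to the protected subnet is diverted through the appliance while everything else follows the normal VPC route, which is exactly the insertion-in-the-routing-path behavior the feature enables.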

“Customers know that AWS already offers more breadth and depth of capabilities in Amazon EC2 than any other cloud provider. They also tell us that this breadth and depth provides great benefit because they are running more and more diverse workloads in the cloud, each with its own characteristics and needs,” said Matt Garman, Vice President, Compute Services, AWS. “As they look to bring more and more diverse workloads to the cloud, AWS continues to expand our offerings in ways that offer them better performance and lower prices. Today’s new compute and networking capabilities show that AWS is committed to innovating across Amazon EC2 on behalf of our customers’ diverse needs.”

Netflix is the world's leading internet entertainment service with 158 million memberships in 190 countries enjoying TV series, documentaries, and feature films across a wide variety of genres and languages. “We use Amazon EC2 M instance types for a number of workloads inclusive of our streaming, encoding, data processing, and monitoring applications,” said Ed Hunter, Director of Performance and operating systems at Netflix. “We tested the new M6g instances using industry standard LMbench and certain Java benchmarks and saw up to 50% improvement over M5 instances. We’re excited about the introduction of AWS Graviton2-based Amazon EC2 instances.”

Nielsen is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. “Our OpenJDK based Java application is used to collect digital data, process incoming web requests, and redirect requests based on business needs. The application is I/O intensive and scaling out in a cost-effective manner is a key requirement,” said Chris Nicotra, SVP Digital, at Nielsen. “We seamlessly transferred this Java application to Amazon EC2 A1 instances powered by the AWS Graviton processor. We’ve since tested the new Graviton2-based M6g instances and it was able to handle twice the load of an A1. We look forward to running more workloads on the new Graviton2-based instances.”

Datadog is the monitoring and analytics platform for developers, operations, and business users in the cloud age. “We’re happy to see AWS continuing to invest in the Graviton processor and the significant momentum behind the Arm ecosystem,” said Ilan Rabinovitch, VP of Product and Community, at Datadog. “We’re excited to announce that the Datadog Agent for Graviton / Arm is now generally available. Customers can now easily monitor the performance and availability of these Graviton-based Amazon EC2 instances in Datadog alongside the rest of their infrastructure.”

Western Digital, a leader in data infrastructure, drives the innovation needed to help customers capture, preserve, access, and transform an ever-increasing diversity of data. “At Western Digital, we use machine learning with digital imaging for quality inspection in our manufacturing processes. In manufacturing and elsewhere, machine learning applications are growing in complexity as we move from reactive to proactive detection,” said Steve Philpott, Chief Information Officer, Western Digital. “Currently, we are limited in how many quality-inspection images can be processed on current CPU-based solutions. We look forward to using the AWS EC2 Inf1 Inferentia instances and expect to increase Western Digital’s image-processing throughput for this purpose, while simultaneously reducing the processing times to an order of milliseconds. Based on initial analysis, we expect to be able to run our ML-based detection models multiple times every hour, significantly reducing occurrences of rare events and upping the bar on our best-in-class product quality and reliability.”

Over 100 million Alexa devices have been sold globally, and customers have also left over 400,000 5-star reviews for Echo devices on Amazon. “Amazon Alexa’s AI and ML-based intelligence, powered by Amazon Web Services, is available on more than 100 million devices today – and our promise to customers is that Alexa is always becoming smarter, more conversational, more proactive, and even more delightful,” said Tom Taylor, Senior Vice President, Amazon Alexa. “Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs, which is why we are excited to use Amazon EC2 Inf1 to lower inference latency and cost-per-inference on Alexa text-to-speech. With Amazon EC2 Inf1, we’ll be able to make the service even better for the tens of millions of customers who use Alexa each month.”

Nubank is one of the largest fintech companies in Latin America, providing a wide range of consumer financial products to over 15 million customers. “We develop simple, secure, and 100% digital solutions to help customers manage their money easily. In order to support millions of transactions on our platform, we run hundreds of applications with a diverse set of requirements on Amazon EC2. In order to keep our infrastructure optimized, we dedicate an engineering team to continuously monitor and analyze the utilization of our EC2 instances,” said Renan Capaverde, Director of Engineering at Nubank. “With AWS Compute Optimizer, we will have a single pane of glass to identify optimal EC2 instance types for our workloads across our AWS environment. The ability to visualize projected resource utilization on various instance types will allow us to make more informed decisions before we update our fleet.”

Japan Exchange Group (JPX) operates financial instruments exchange markets to provide market users with reliable venues for trading listed securities and derivatives instruments. “We are excited about the launch of the IP multicast feature on AWS Transit Gateway”, said Ryusuke Yokoyama, Senior Executive Officer & CIO, Japan Exchange Group, Inc. “It is our top priority as a market operator to improve the convenience and accessibility of the market participants in Japan. We use multicast extensively for market data distribution. Availability of multicast support in AWS will enable us to adopt cloud-native data distribution, improving the user experience while providing easier access for market participants.”

About Amazon Web Services

For 13 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 165 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 69 Availability Zones (AZs) within 22 geographic regions, with announced plans for 13 more Availability Zones and four more AWS Regions in Indonesia, Italy, South Africa, and Spain. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

View source version on businesswire.com: https://www.businesswire.com/news/home/20191203005914/en/

Amazon.com, Inc. Media Hotline Amazon-pr@amazon.com www.amazon.com/pr

Source: Amazon Web Services, Inc. (AWS)

新的基于ARM的AmazonEC2M、R和C实例系列版本,由新的AWS设计的Graviton 2处理器驱动,比基于x86的类似实例提高了40%的价格/性能。

新的AmazonEC 2机器学习实例,以AWS设计的Inferentia芯片为特色,提供高性能和最低成本的云中机器学习推理。

AWS计算优化器机器学习驱动实例推荐引擎使选择正确的计算资源变得更加容易。

AWS传输网关现在支持本地ip多播协议,与sd-wan合作伙伴集成,使客户分支机构和aws之间更容易更快地连接,并提供了一种新的功能,使用户可以更容易地从单一的玻璃面板管理和监视他们的全球网络。

vpc侵入路由允许客户在vpc中轻松地部署第三方设备,以实现专门的网络和安全功能。

西雅图-(商业线路)-今天在AWS Re:发明,亚马逊网络服务公司。亚马逊(Amazon.com)公司(纳斯达克市场代码:AMZN)宣布了9项新的亚马逊弹性计算云(EC2)创新。AWS已经比任何其他云提供商都具有更多的计算和联网能力,包括最强大的GPU实例、最快的处理器以及唯一具有100 Gbps标准实例连接性的云。今天,AWS为其业界领先的计算机和网络创新增加了新的基于ARM的实例(m6g、c6g、r6g),这些实例由AWS设计的Graviton 2处理器驱动,机器学习推理实例(Inf1)由AWS设计的Inferentia芯片驱动,这是AmazonEC 2的一个新特性,它利用机器学习来优化AmazonEC 2的使用,以及网络增强功能,使客户更容易地扩展、安全和管理AWS中的工作负载。

新的基于ARM的AmazonEC2M、R和C实例系列,由新的AWS设计的AWS Graviton 2处理器驱动,与基于x86的类似实例相比,提供了40%的改进价格/性能。

自一年前推出以来,基于ARM的AmazonEC2a1实例(由AWS第一个版本的Graviton芯片驱动)通过运行容器化微服务和web层应用等大规模工作负载,为客户提供了显著的成本节约。基于节省的成本,再加上操作系统供应商(OSVs)和独立软件供应商(ISV)的更广泛的生态系统对ARM的越来越多和重要的支持,客户现在希望能够在基于AWS Graviton的实例上运行要求更高、特性各异的工作负载,包括计算量大的数据分析和内存密集型数据存储。这些不同的工作负载需要增强能力,而不是A1实例所支持的能力,例如更快的处理、更高的内存容量、更高的网络带宽和更大的实例大小。

新的基于ARM的AmazonEC2M、R和C实例系列由新的AWS设计的Graviton 2处理器驱动,与目前基于x86处理器的M5、R5和C5实例相比,其价格/性能提高了40%,适用于各种工作负载,包括高性能计算、机器学习、应用服务器、视频编码、微服务、开源数据库和内存缓存。这些新的基于ARM的实例是由AWS Nitro系统提供的,这是一种自定义AWS硬件和软件创新的集合,它能够提供高效、灵活和安全的云服务,并提供隔离的多租户、私有网络和快速本地存储,以减少客户在使用AWS时的开销和工作量。AWS Graviton 2处理器与第一代相比引入了几个新的性能优化。AWS Graviton 2处理器采用64位ARM Neoverse芯和自定义硅,由AWS设计,采用先进的7纳米制造技术。AWS Graviton 2处理器为云本机应用程序进行了优化,为科学和高性能的计算负载提供了每核心2倍的浮点性能,为更快的机器学习推理提供了优化的指令,为压缩工作负载提供了自定义硬件加速。AWS Graviton 2处理器还提供始终对完全加密的DDR 4内存,并提供50%的每核心加密性能,以进一步提高安全性。AWS Graviton 2支持实例提供了多达64个vCPU、25 Gbps的增强网络和18 Gbps的EBS带宽。客户还可以为所有新实例类型选择NVMe SSD本地实例存储变体(C6gd、M6gd和R6gd)或裸金属选项。几种开源软件发行版(AmazonLinux 2、Ubuntu、RedHat Enterprise Linux、SUSE Linux Enterprise Server、Fedora、Debian、FreeBSD)都支持新的实例类型。, 以及OpenJDK的Amazon Corretto发行版、容器服务(Docker Desktop、Amazon ECS、Amazon EKS)、代理(Amazon CloudWatch、AWS系统管理器、Amazon检查器)和开发工具(AWS Code Suite,Jenkins)。AWS服务,如AmazonElasticLoadyBalance、AmazonElestiCache和AmazonElasticMapReduce已经测试了AWS Graviton 2实例,发现它们提供了更好的价格/性能,并计划在2020年将它们投入生产。今天可以预览M6g实例。C6g、C6gd、M6gd、R6g和R6gd实例将在今后几个月内提供。要了解有关AWSGraviton 2支持实例的更多信息,请访问:https://aws.amazon.com/ec2/graviton.

AmazonEC2Inf1实例由AWS Inferentia芯片驱动,提供高性能和最低成本的云中机器学习推理。

来自不同行业的客户正在转向机器学习,以解决常见的用例(例如个性化购物建议、金融交易中的欺诈检测、增加客户对聊天机器人的参与等)。这些客户中的许多正在发展他们对机器学习的使用,从运行实验到扩大生产机器学习负载,在这些工作负载中,性能和效率非常重要。客户希望他们的机器学习应用程序具有高性能,以便提供尽可能好的最终用户体验。虽然培训理所当然地受到了很多关注,但推理实际上是生产中运行机器学习的大部分复杂性和成本(每花费一美元用于培训,多达9美元用于推理),从而限制了更广泛的应用,阻碍了客户的创新。此外,几个实时机器学习应用程序对推理执行的速度(延迟)很敏感,而其他批处理工作负载则需要为每秒处理多少推理(吞吐量)进行优化,这要求客户在为延迟或吞吐量优化的处理器之间进行选择。

使用AmazonEC2Inf1实例,客户可以在云中获得高性能和最低成本的机器学习推理,并且在生产中运行大型机器学习模型时不再需要在优化延迟和吞吐量之间进行次优权衡。AmazonEC2Inf1实例具有AWS Inferentia,这是由AWS设计的高性能机器学习推理芯片。AWS Inferentia提供了非常高的吞吐量,低延迟和持续的性能,为成本效益极高的实时和批处理推理应用程序.AWS Inferentia为多个框架(包括TensorFlow、PyTorch和Apache MXNet)和多个数据类型(包括int-8和混合精度的FP-16和bFloat 16)提供每秒128次Tera操作(每秒最高或万亿次操作),并为多个框架(包括TensorFlow、PyTorch和Apache MXNet)提供高达2000次的每个AmazonEC2Inf1实例。AmazonEC2Inf1实例构建在AWS Nitro系统上,这是一个自定义AWS硬件和软件创新的集合,它能够提供高效、灵活和安全的云服务,并具有隔离的多租户、私有网络和快速本地存储,以减少用户在使用AWS时的开销和工作量。AmazonEC2Inf1实例与AmazonEC2G4实例家族相比,提供了较低的推理延迟,最高可提高3倍的推理吞吐量,并且比AmazonEC2G4实例家族低40%,后者已经是云中可用的机器学习推理的最低成本实例。使用AmazonEC2Inf1实例,客户可以运行大规模机器学习推理,以最低的成本执行图像识别、语音识别、自然语言处理、个性化和欺诈检测等任务。AmazonEC2Inf1实例可以使用AWS深度学习Amis部署,并且可以通过托管服务(如AmazonSageMaker、AmazonElasticContainerService(ECS))获得。, 和亚马逊ElasticKubernetes服务(EKS)。要开始使用AmazonEC2Inf1实例,请访问:https://aws.amazon.com/ec2/instance-types/inf1.

AWS计算优化器使用机器学习驱动的实例推荐引擎,以便于选择正确的计算资源。

为工作负载选择正确的计算资源是一项重要任务。过度配置资源会导致不必要的成本,而不足则会导致性能低下.到目前为止,为了优化AmazonEC 2资源的使用,客户已经分配了系统工程师来分析资源利用率和性能数据,或者投资于资源来在各种工作负载上运行应用程序模拟。而且,随着时间的推移,这种努力可能会增加,因为随着应用程序和使用模式的改变,新的应用程序进入生产,以及新的硬件平台变得可用,必须重复资源选择过程。因此,客户有时会将他们的资源分配得效率低下,为昂贵的第三方解决方案付费,或者自己构建管理AmazonEC 2使用的优化解决方案。

AWS计算优化器提供直观和易于操作的AWS资源建议,以便客户可以为其工作负载确定最佳的AmazonEC 2实例类型,包括属于自动缩放组的实例类型,而不需要专门知识或投入大量时间和金钱。AWS计算优化器分析工作负载的配置和资源利用率,以确定数十个定义特征(例如,工作负载是否是CPU密集型的,还是显示每天的模式)。AWS计算优化器使用AWS建立的机器学习算法来分析这些特性,并确定工作负载所需的硬件资源占用空间。然后,AWS计算优化器推断工作负载将如何在各种AmazonEC 2实例上执行,并为该特定工作负载的最佳AWS计算资源提供建议。客户只需点击几下AWS管理控制台就可以激活AWS计算优化器。一旦激活,AWS计算优化器立即开始分析运行的AWS资源,观察它们的配置和AmazonCloudWatch度量历史,并根据它们的特性生成建议。要开始使用AWS计算优化器,请访问http://aws.amazon.com/compute-optimizer.

新的AWS传输网关功能增加了对IP多播协议的本地支持,使创建和监视可扩展的全局网络变得更加容易

AWS传输网关是一个网络枢纽,使客户能够轻松地扩展和管理亚马逊虚拟私有云(Vpc)与客户的现场数据中心之间的连接,以及AWS区域内数千台VPC之间的连接。今天,AWS为AWS传输网关增加了五种新的网络功能,简化了全球专用网络的管理,并启用了云中的多播工作负载:TransportGateway Multicastis--公共云中的第一个本地组播支持。当客户需要将同一数据的副本从单个源快速分发给多个订户(例如数字媒体或来自证券交易所的市场数据)时,他们使用多播路由。组播减少了整个网络的带宽,并确保终端用户在大致相同的时间内快速获得信息。到目前为止,客户还没有一种在云中托管多播应用程序的简单方法。大多数多播解决方案都是基于硬件的,因此客户被迫购买和维护不灵活和难以扩展的硬件。此外,他们还必须在单独的现场网络中维护这些硬件,这增加了复杂性和成本。有些人试图在没有本地支持的情况下在云中实现多播解决方案,但这些解决方案往往很难支持和扩展。通过传输网关多播,用户现在可以在云中构建多播应用程序,这些应用程序可以根据需求进行向上和向下扩展,而无需购买和维护自定义硬件来支持峰值负载。通过跨多个AWS区域连接传输网关,可以轻松地构建全球网络。在传输网关跨区域窥视之前,一个区域中vpc中的资源能够与不同区域vpc中的资源进行通信的唯一方式是通过这些vpc之间的对等连接。, 如果你需要很多的话,很难进行规模和管理。区域间监视解决了这一问题,通过在不同AWS区域之间监视AWS传输网关,可以轻松地创建安全和私有的全球网络。所有使用区域间窥视的区域之间的通信都由AWS骨干网络承载,因此它从不穿越公共互联网,而且它也是匿名和加密的。这提供了最高级别的安全性,同时确保客户的流量总是在区域间选择最佳路径,提供了比公共Internet更高和更一致的性能。加速站点到站点VPN是一种新的功能,为需要从分支位置安全访问AWS中服务的客户提供更安全、可预测和可执行的VPN体验。许多客户在较小的城市或国际地点设有分支机构,这些办事处实际上远离在AWS中运行的应用程序。它们使用Internet上的VPN连接将分支机构连接到AWS中的服务,但它们的连接通常会穿越Internet。这条路径包括通过多个公共网络跳过,这可能导致可靠性降低、性能不可预测和更多地暴露于试图进行的安全攻击。通过加速VPN,客户可以通过最近的AWS边缘位置将其分支位置连接到AWS传输网关,从而减少所涉及的网络跳数,并优化网络性能以获得更低的延迟和更高的一致性。与流行的软件定义广域网(SD-WAN)供应商的集成--一种新的功能使客户能够轻松地集成来自Cisco、阿鲁巴、银峰和Aviatrix的第三方SD-WAN解决方案和AWS。今天,许多客户使用sd-wan解决方案来管理其分支位置的本地网络。, 并将这些位置连接到AWS中的数据中心或云网络。要建立这些分支设备和网络连接,客户必须手动提供和配置他们的分支和AWS资源,这需要花费大量的时间和精力。虽然加速站点到站点VPN提供了来自分支位置的安全、可预测和可执行的连接,但这种SD-WAN集成更进一步,允许客户使用流行的第三方SD-WAN解决方案自动提供、配置和连接到AWS。所有这些都使客户更容易建立和管理与分支机构的连接,TransportGatewayNetworkManager是一种新的功能,它通过将所有的东西都聚集在一块玻璃中,简化了对云端和现场中的全球网络资源的监控。当客户将越来越多的应用程序转移到他们的云网络上运行时,他们仍然保持着一个就地网络,因此他们需要监控这两个环境,这样他们就可以在任何一个环境中快速响应任何问题。今天,为了管理和操作这些网络,客户需要在AWS和其他现场供应商之间使用多种工具和控制台。这种复杂性可能导致管理错误,或者延迟识别和解决问题所需的时间。NetworkManager通过提供一个统一的仪表板来简化全球专用网络的创建和操作,以可视化和监控客户VPC、TransportGatway、DirectConnect和VPN与分支机构和内部网络的连接。

Amazon VPC Ingress Routing helps customers easily integrate third-party networking and security appliances into their networks

When enterprises migrate to AWS, they often want to bring along the networking or security appliances they have used for years in their on-premises data centers as virtual appliances in the cloud. While AWS Marketplace offers the broadest selection of networking and security virtual appliances in the cloud, customers have lacked the flexibility to easily route traffic entering their Amazon VPCs through these appliances. With Amazon VPC Ingress Routing, customers can now associate route tables with their Internet Gateway (IGW) and Virtual Private Gateway (VGW) and define routes that redirect traffic entering and leaving a VPC to third-party appliances. This makes it easier for customers to insert virtual appliances that meet their networking and security needs into the routing path of inbound and outbound traffic. The capability can be provisioned with just a few clicks or API calls, letting customers easily deploy networking and security appliances in their networks without building complex workarounds that do not scale. Partners whose virtual appliances currently support Amazon VPC Ingress Routing include 128 Technology, Aviatrix, Barracuda, Check Point, Cisco, Citrix Systems, FireEye, Fortinet, Forcepoint, HashiCorp, IBM Security, Lastline, NetScout Systems, Palo Alto Networks, ShieldX Networks, Sophos, Trend Micro, Valtix, Vectra, and Versa Networks.
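The "few API calls" mentioned above boil down to two steps: associate a route table with the gateway (the edge association), then route inbound traffic to the appliance's network interface. A minimal sketch with the AWS CLI follows; all resource IDs and the CIDR are placeholders for illustration.

```shell
# Step 1: associate a route table with the internet gateway. This
# "edge association" is what enables ingress routing for traffic
# entering the VPC through the IGW.
aws ec2 associate-route-table \
    --route-table-id rtb-0aaa1bbb2ccc3ddd4 \
    --gateway-id igw-0aaa1bbb2ccc3ddd4

# Step 2: in that route table, send traffic destined for a workload
# subnet (10.0.1.0/24 here) to the inspection appliance's elastic
# network interface instead of directly to the subnet.
aws ec2 create-route \
    --route-table-id rtb-0aaa1bbb2ccc3ddd4 \
    --destination-cidr-block 10.0.1.0/24 \
    --network-interface-id eni-0aaa1bbb2ccc3ddd4
```

The appliance then forwards inspected traffic on to its destination; a mirror-image route in the subnet's route table is typically used so return traffic also passes through the appliance.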

"Customers know that AWS already offers a broader and deeper set of capabilities in Amazon EC2 than any other cloud provider. They also tell us that this breadth and depth delivers tremendous benefits as they run more and more diverse workloads in the cloud, each with its own characteristics and requirements," said Matt Garman, Vice President, AWS Compute Services. "As AWS seeks to bring more and more workloads to the cloud, we continue to expand our offerings in ways that deliver better performance at lower cost. Today's new compute and networking capabilities demonstrate AWS's commitment to innovating on Amazon EC2 to meet the diverse needs of our customers."

Netflix is the world's leading internet entertainment service, with 158 million memberships in 190 countries enjoying TV series, documentaries, and feature films across a wide variety of genres and languages. "We use Amazon EC2 M instance types for a number of workloads, including streaming, encoding, data processing, and monitoring applications," said Ed Hunter, Director of Performance and Operating Systems at Netflix. "We tested the new M6g instances using the industry-standard LMbench and certain Java benchmarks and saw up to a 50% improvement over M5 instances. We are excited about the introduction of AWS Graviton2-based Amazon EC2 instances."

Nielsen is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. "Our OpenJDK-based Java application is used to collect digital data, handle incoming web requests, and redirect requests based on business needs. The application is I/O intensive, and scaling it cost-effectively is a key requirement," said Chris Nicotra, SVP Digital, Nielsen. "We seamlessly migrated this Java application to AWS Graviton processor-powered Amazon EC2 A1 instances. We have tested the new Graviton2-based M6g instances, and they were able to handle twice the load of the A1 instances. We look forward to running more workloads on the new Graviton2-based instances."

Datadog is the monitoring and analytics platform for developers, IT operations teams, and business users in the cloud age. "We are excited to see AWS continue to invest in Graviton processors, and about the tremendous momentum behind the ARM ecosystem," said Ilan Rabinovitch, Vice President of Product and Community at Datadog. "We are pleased to announce that the Datadog agent for Graviton/ARM is now generally available. Customers can now easily monitor the performance and availability of these Graviton-based Amazon EC2 instances in Datadog, alongside the rest of their infrastructure."

As a leader in data infrastructure, Western Digital drives the innovation needed to help customers capture, preserve, access, and transform an ever-increasing diversity of data. "At Western Digital, we use machine learning and digital imaging technology to perform quality inspections in our manufacturing processes. In manufacturing and elsewhere, machine learning applications are becoming increasingly sophisticated as we shift from reactive to proactive detection," said Steve Philpott, Chief Information Officer at Western Digital. "Today, we are limited in how many quality-inspection images we can process on our current CPU-based solutions. We look forward to using Amazon EC2 Inf1 instances powered by AWS Inferentia, and we expect them to increase our image-processing throughput while reducing processing times to the millisecond range. Based on preliminary analysis, we expect to be able to run our ML-based detection models on an hourly basis, significantly reducing the occurrence of rare events and improving the quality and reliability of our products."

More than 100 million Alexa devices have been sold globally, and customers have left over 400,000 five-star reviews for Echo devices on Amazon. "Amazon Alexa's AI- and ML-based intelligence, powered by Amazon Web Services, is available on more than 100 million devices today, and our promise to customers is that Alexa is always becoming smarter, more conversational, more proactive, and even more delightful," said Tom Taylor, Senior Vice President, Amazon Alexa. "Delivering on that promise requires continuous improvements in response times and in machine learning infrastructure costs, which is why we are excited to use Amazon EC2 Inf1 to lower inference latency and cost-per-inference for Alexa text-to-speech. With Amazon EC2 Inf1, we will be able to make the service even better for the tens of millions of customers who use Alexa every month."

Nubank is one of the largest fintech companies in Latin America, offering a broad range of consumer financial products to more than 15 million customers. "We develop simple, secure, and 100% digital solutions to help customers easily manage their money. To support the millions of transactions on our platform, we run hundreds of applications with different sets of requirements on Amazon EC2. To optimize our infrastructure, we have dedicated an engineering team to continuously monitor and analyze the utilization of our EC2 instances," said Renan Capaverde, Director of Engineering at Nubank. "With AWS Compute Optimizer, we will have a single pane of glass to identify optimal EC2 instance types for the workloads in our AWS environment. Being able to visualize projected resource utilization across various instance types will allow us to make more informed decisions before updating our fleets."

Japan Exchange Group (JPX) operates financial instruments exchange markets, providing market users with reliable venues for trading listed securities and derivatives. "We are excited about the launch of the IP multicast capability on AWS Transit Gateway," said Ryusuke Yokoyama, Senior Executive Officer and CIO of Japan Exchange Group, Inc. "As a market operator, our top priority is to improve convenience and accessibility for participants in the Japanese markets. We use multicast extensively for market data distribution. The multicast support now available in AWS will allow us to adopt cloud-native data distribution, improving the user experience while providing more accessible connectivity for market participants."

About Amazon Web Services

For 13 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud platform. AWS offers over 165 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 69 Availability Zones (AZs) within 22 geographic regions, with announced plans for 13 more Availability Zones and four more AWS Regions in Indonesia, Italy, South Africa, and Spain. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit Amazon.com/About and follow @AmazonNews.

View source version on businesswire.com: https://www.businesswire.com/news/home/20191203005914/en/

Amazon.com, Inc. Media Hotline: Amazon-pr@Amazon.com www.amazon.com/pr

Source: Amazon Web Services, Inc. (AWS)
