Intel Vice President Lectures on the Cloud: "What Is the Data Center Infrastructure That Will Support the Next Decade?"
How does Intel view the cloud, and how does it intend to capture it and act on it? A keynote lecture by Kirk Skaugen, Intel vice president and general manager of the Data Center Group, titled "What is the data center infrastructure that will support the next decade?", was held at the Tokyo International Forum, where "Cloud Computing World Tokyo 2011" and "Next Generation Data Center 2011" are taking place today and tomorrow, so I went to hear it.
According to the description published ahead of the keynote: "Data centers, which are growing more complex with the rapid expansion of the cloud, are being asked to change. To solve the problems IT faces and unlock the great advantages of cloud computing, an architecture based on open industry standards must be defined. In this presentation we will discuss Intel's vision for the future of the cloud, its activities in standards organizations, its collaboration with the Open Data Center Alliance, and the status of cloud projects within Intel's own IT department." The talk that followed was indeed very interesting.
The following is a report of the keynote lecture.
A tremendous number of people formed lines like this at the Tokyo International Forum, all of them waiting for Intel's lecture.
The venue looked like this.
Crowded to the point of standing room only.
So many people attended that the talk was relayed to an overflow room.
Kirk Skaugen, Intel vice president and general manager of the Data Center Group, takes the stage.
One billion people are connected to the Internet in 2011, and that figure is expected to grow from 1 billion to 2.5 billion by 2015. With people-to-people, people-to-machine, and machine-to-machine connections being made through social networks and the like, as many as 15 billion devices are expected to be connected to the Internet.
Although this number already seems very large, Ericsson and others predict that as many as 50 billion devices will be connected to the Internet by 2020. That means TVs, mobile phones, embedded devices, industrial robots, and other devices with assigned IP addresses will all be on the net.
As a result, more than 1,000 exabytes of data are expected to flow across the Internet by 2015; as of 2010, 245 exabytes were already being exchanged.
It is also estimated that more data will be transmitted in 2015 alone than was transmitted in total up to 2010, meaning data traffic will roughly quadruple over the next four years.
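The growth figures cited here imply a steep annual rate. A minimal sketch of the arithmetic, using only the two numbers from the talk (245 EB in 2010, 1,000 EB projected for 2015):

```python
# Figures cited in the keynote: 245 EB of Internet traffic in 2010,
# more than 1,000 EB projected for 2015.
traffic_2010_eb = 245
traffic_2015_eb = 1000
years = 2015 - 2010

growth_factor = traffic_2015_eb / traffic_2010_eb  # overall multiple, ~4x
cagr = growth_factor ** (1 / years) - 1            # implied compound annual growth

print(f"Overall growth: {growth_factor:.1f}x")
print(f"Implied annual growth rate: {cagr:.1%}")
```

Working backward from these two data points, the "roughly four times in four to five years" claim corresponds to about 32% growth per year.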
All of this 1,000 exabytes of traffic means an enormous amount of data will be generated. To handle it, Intel has moved from single-core to dual-core processors and has announced CPUs with up to 10 cores, but as an industry we will have to make further efforts to lower the cost per CPU core.
Considered from a business standpoint, Intel believes as much as $25 billion can be saved, and is working to make that possible by 2015. Why is this achievable? The data center is a critical part of the business, yet about 80% of its cost today goes simply to keeping it running; budgets must be secured just to maintain the status quo. In other words, there is far too much waste. By rethinking the energy efficiency of the data center, the operational budget should be usable far more efficiently.
At a global level, 2 to 3% of all energy consumption is accounted for by data centers, which is why more energy-efficient technology needs to be adopted. Intel believes that by 2014 it will be possible to develop systems efficient enough to save the equivalent of 45 GW of generating capacity and to deploy them in data centers worldwide.
For example, about 40% of the installed base is said to consist of single-core servers. By removing these single-core servers from operation and replacing them with servers using Intel's latest cores, the total number of servers can be reduced. That in turn lowers cooling costs and software license fees; the initial capital investment is higher, but it can be recovered within the first two to five months.
Even now the installed base of servers is aging by the minute, and although replacing it with new multi-core servers would still recover the initial investment in two to five months, that replacement is not happening, so a great deal of energy is being wasted. From an energy-efficiency standpoint, this is exactly the big issue.
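The payback logic described above can be sketched as a simple calculation. All cost figures below are illustrative assumptions, not numbers from the talk; only the two-to-five-month payback range comes from the keynote.

```python
# A minimal sketch of the server-consolidation payback calculation.
# Cost figures are assumed for illustration, not taken from the keynote.

def payback_months(capex: float, monthly_savings: float) -> float:
    """Months needed for operational savings to recover the upfront cost."""
    return capex / monthly_savings

# Assume 10 aging single-core servers are consolidated onto 1 new server.
old_servers = 10
power_cost_per_server = 120.0    # USD/month for power + cooling (assumed)
license_cost_per_server = 80.0   # USD/month for software licenses (assumed)
new_server_capex = 6000.0        # USD upfront for the new server (assumed)

# Savings come from every retired server's power, cooling, and license costs.
monthly_savings = (old_servers - 1) * (power_cost_per_server + license_cost_per_server)
print(f"Payback: {payback_months(new_server_capex, monthly_savings):.1f} months")
```

Under these assumed figures the investment pays back in about 3.3 months, consistent with the two-to-five-month range Skaugen cites.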
Listening to Internet industry advisors, cloud computing systems are evolving around each vendor's own proprietary specifications and are not interoperable, and exchanging data through the public cloud raises security questions about how secure a form that exchange can take. If such problems can be solved, there is the possibility of an industry worth more than $100 billion.
As one of its cloud visions, Intel has its Cloud 2015 vision.
This is not the distant future, and the answers to these problems do not exist yet. In the ideal cloud, data would be exchanged automatically and securely from your own cloud to other clouds through firewalls, but Intel believes the various clouds of today are not yet ready for that. For example, even a workflow as simple as connecting to a network, creating a travel expense report, and submitting it through the public cloud will not spread rapidly unless the public cloud is trusted. In other words, no matter how new the computer chips or how many tens of thousands of CPUs are newly deployed, the judgment is that the public cloud is not yet secure enough.
As for automation, the cloud should be intelligent rather than static: the CPU must be used more efficiently, hot spots must be dealt with, and the cloud must be made more versatile, easier to use, and more automated.
One concept Intel is considering: in the current model, the cloud can only be used at the performance of the client PC, so no matter how good the cloud is, it is held back if the PC itself is slow. Imagine instead that the servers in the data center carried everything the client PC does: in the extreme case, both graphics and computation would run entirely on the cloud server side, and the client PC would work fine even without graphics capability. How good would such a cloud be? To do this, however, hundreds of gigabytes of data would have to be backed up, and there is simply not enough time to do that every day, so it cannot be done easily.
However, if the performance inherent to the client PC can be improved, the energy consumption of the servers in the data center can be reduced, so a roadmap in which performance rises while energy consumption falls becomes conceivable.
Here, as an example of the kind of data center Intel has in mind, Mr. Amano of Bit Isle Inc. gives an explanation.
From here, a presentation by Mr. Amano of Bit Isle Inc. as a real-world example.
After becoming independent from its parent company in 2000, Bit Isle began operating as a data center in 2001. What is characteristic of Bit Isle is that it built its own data centers as urban Internet data centers and, above all, manages them itself. There are already four data centers in Tokyo and one in Osaka to the west, about 5,500 racks in total; next year the number of data centers is planned to increase further, to about 6,000 racks in total, housing more than 100,000 servers.
About 10% of customers use the data centers for IDC and ISP services, about 60% are service providers running online games and EC sites, about 15% are system developers and SIers, and the remaining 15% or so are end users.
You should picture a very complex environment that includes many different kinds of devices. For protection, there are also many areas, such as systems keyed to customers' own management IDs, that we cannot step into.
Along with that, private clouds exist among our customers, so what is needed is a complex operation system in which each party can access only what its ID permits.
Regarding data centers going forward, as mentioned earlier, facility issues related to power measures will also matter. Electricity demand is higher now than it was ten years ago, energy saving is indispensable for a data center in the first place, and converted into electricity bills alone the losses are still large. In addition, environmental requirements such as the Tokyo Metropolitan environmental ordinance must be followed, and the power restrictions imposed by Tokyo Electric Power Company after the March 11 Tohoku earthquake are also a big issue.
Future tasks also include, for example, power management of IT equipment at the rack level, air-conditioning management, and server power management.
When thinking about this kind of management, the first step toward improving server power usage efficiency was the idea of making server power consumption visible: totaling up how many watts the servers draw and, for example, sending that total out by e-mail. When it came to how to realize this, the Intel Data Center Manager made it possible to record maximum values from the power history and to cap power at its peaks.
For a company with as many devices as ours, power management is a serious task, so having something like Intel Data Center Manager provided by Intel is very helpful.
Beyond capping power, it is important to be able to set various thresholds, collect detailed values broken down by day of week and time of day, and gather a variety of data. But above all, being able to visualize total power consumption, and to obtain data that meets both our customers' needs and our own, is very important.
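The monitoring workflow described above can be sketched in a few lines. This is a minimal illustration only: `get_power_readings()` is a hypothetical stand-in, and a real deployment would pull these values from a tool such as Intel Data Center Manager rather than hard-code them.

```python
# A minimal sketch of the rack-level power monitoring described above.
# get_power_readings() is a hypothetical stand-in for real telemetry
# (e.g. values collected via Intel Data Center Manager).

def get_power_readings() -> dict[str, float]:
    """Hypothetical per-server power readings, in watts."""
    return {"srv-01": 310.5, "srv-02": 287.0, "srv-03": 402.3}

def summarize(readings: dict[str, float], cap_watts: float) -> dict:
    """Total the rack's draw and flag servers exceeding a power threshold."""
    total = sum(readings.values())
    over_cap = [name for name, watts in readings.items() if watts > cap_watts]
    return {
        "total_watts": total,                   # visualize total consumption
        "peak_watts": max(readings.values()),   # record the maximum value
        "over_cap": over_cap,                   # candidates for power capping
    }

print(summarize(get_power_readings(), cap_watts=350.0))
```

The same summary could be aggregated per rack or per customer, which matches the talk's point that both the operator's and the customers' data needs have to be served from the same measurements.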
※ Mr. Amano, vice president of Bit Isle Inc., leaves the stage here
We believe power management is a very promising technology. For example, when running servers you may need to switch over to a battery-backed backup power supply; by capping power in advance, the batteries can be made smaller.
I would also like to talk about how we will develop things going forward and how usage limits should be determined, but first let me talk about the form of alliance Intel has in mind.
We have been working to standardize the cloud, evolve it, and spread it, and as a result as many as 16 million servers have been deployed. This alliance is quite unique: the Open Data Center Alliance in particular, which consists of IT companies and end users, has worked to develop open standards. Rather than forcing these standardization movements on anyone, we work to make customers' requirements easier to define so that technology is easier to incorporate into the standard.
In adopting not only Intel technology but also devices built on technologies developed by other companies, such as USB, we constantly pursue standardization by considering what methods end users can adopt and what they will need in the future, introducing standards and incorporating technology broadly. By receiving such consultations from other companies as well, we are pushing standardization further.
Finally, we are working with the Open Data Center Alliance with the aim of making the Internet better. In addition, solution stacks verified by cloud builders are highly reliable, so security and administrative controls will be maintained.
Thank you very much for your time today.