The company’s CEO has doubled down on its stance on working from home and flexible working.
Opinions differ between advocates of the new work-where-you-like normal and those who see the number of bodies coming through the office door as a proxy for productivity.
Those in the latter camp include Goldman Sachs CEO David Solomon, who has taken several opportunities to insist that his staff get back to the office full time, as well as UK Prime Minister Boris Johnson, who insisted the temptation of coffee and cheese at home posed a serious threat to the nation’s post-pandemic recovery.
The company canceled the lease on an unbuilt 325,000-square-foot (30,193-square-meter) tower. Announcing an end to the assumption that most staff would work from the office, and introducing a flexible working plan, the company’s president said that the employee experience is about more than ping-pong tables and snacks.
An enforced return to the old normal won’t be successful, the CEO said this week. Speaking at a company conference in New York, he said office mandates are never going to work.
There may be good reason for his thinking: at the very least, organizations that don’t accommodate flexible and home working risk missing out on the in-demand IT workforce.
65 percent of IT employees say that whether they can work flexible hours will affect their decision to stay or go. In Europe, work-life balance now rivals compensation for IT workers, drawing equal with it for the first time in a decade, according to a new study.
Not everyone is in favor of giving employees more options about where to put the laptop. Employees who sidestep a return to the office and choose to work from home permanently post-pandemic may face pay cuts: Google, the search and cloud giant, created a tool in June to calculate the wage implications of full-time remote working.
The four-day week is gaining steam as large tech businesses’ return-to-office attempts fail.
The world’s biggest trial program began this week in the UK, with participating employers paying a regular week’s wages for 80 percent of the hours. The pilot may be the largest, but it is not the only one.
Dell recently switched to a four-day week in the Netherlands after a successful trial in Argentina.
Bolt adopted the four-day week in January, while Kickstarter moved its team to four days in March.
Thousands of workers from across the country are taking part in the UK trial, which runs from June to December this year.
There’s mounting evidence that four-day work weeks make employees more productive. As David Simpson wrote, the number of hours spent at our desks doesn’t correlate with our happiness or productivity.
It’s hard to ignore the constant headlines about the changing nature of work since the coronavirus pandemic began. Most of the news has centered on return-to-office initiatives and their failures, but the four-day week has been a topic of discussion for some time.
A trial of the four-day work week in Japan resulted in a 40 percent increase in productivity, though the measured results weren’t published until 2021, thrusting the four-day week into the spotlight just as the return to the office became a hot topic.
The surge of interest might have something to do with the 4 Day Week Global initiative, a non-profit concern that’s one of the organizations behind the current six-month four-day work week trial in Britain. It runs pilot programs in the US, Canada, the UK, Ireland, Australia and New Zealand, and offers support for companies considering the shift.
According to 4 Day Week Global’s research, job performance was maintained under a four-day work week, while stress levels dropped and work/life balance improved significantly, with the proportion of employees satisfied with theirs rising from 54 percent to 78 percent.
In the current economic climate it also helps with hiring: 63 percent of businesses surveyed by 4 Day Week Global found it easier to attract and retain employees after switching to a four-day work week.
Juliet Schor, lead researcher for 4 Day Week Global, said that companies have shown you can be 100 percent productive in 80 percent of the time. Workers across industries are discovering that their days are filled with unnecessary activities that can be cut without hurting the business.
Schor conceded that industries like health care and teaching, where staff are already stretched thin, wouldn’t be able to adapt to a four-day model. The challenge is one of operational transformation: executed poorly, a four-day week experiment could further burden those with already overloaded schedules.
“You’d have to become 25 percent more productive per day to justify the four-day work week,” said the Institute of Economic Affairs fellow.
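The fellow’s figure is simple arithmetic: squeezing five days of output into four requires each day to produce 5/4 of a normal day’s output. A quick sanity check of that claim (an illustration only, not from the Institute’s analysis):

```python
# Sanity-check the claim that a four-day week requires workers to be
# 25 percent more productive per day to hold weekly output constant.

def required_daily_gain(days_before: int, days_after: int) -> float:
    """Fractional per-day productivity increase needed to keep total
    weekly output constant when the working week shrinks."""
    return days_before / days_after - 1.0

print(f"{required_daily_gain(5, 4):.0%} more output needed per day")  # 25%
```

The same formula shows why more aggressive compressions get steep quickly: a three-day week would demand roughly 67 percent more output per day.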
Executive staff at Musk’s car company will not be allowed to work from afar.
The New York Times obtained an email that Musk sent to the company’s underlings.
Anyone wishing to do remote work must be in the office for at least 40 hours per week, the memo says: less than what is asked of factory workers.
Musk allows that he may bend his rules for “particularly exceptional contributors” at his discretion, though “if you have to ask, that’s probably not you.” And “office,” the billionaire poly-boss clarifies, means a main office, not some remote branch unrelated to one’s duties.
The company did not respond to a request to confirm the authenticity of the directive. Musk didn’t challenge its authenticity, though, when the Whole Mars Catalog asked him to offer his thoughts on “people who think coming into work is an antiquated concept.”
Musk suggested that they “pretend to work somewhere else.”
The Register asked Musk whether he will follow his own mandate and spend his CEO hours in the company office rather than work from home. We don’t expect a reply.
According to a Bernstein analyst, Musk’s marching orders are unlikely to reduce executive turnover at the company, which was estimated at 44 percent for those reporting directly to the CEO in 2019. Other Silicon Valley companies average 9 percent.
Retention may be made worse by the return-to-office memo. The majority of workers came to prefer remote work during the COVID-19 pandemic, and 66 percent of job candidates aren’t willing to return to the office, according to Robert Half. A survey of 1,000 US hiring managers conducted by Upwork last year found that 40.7 million American professionals are expected to be fully remote in the next five years.
Other Silicon Valley firms have made waves by giving their workers more flexibility: last month one company declared that its employees could work from anywhere.
Nor does it look good when companies take flexibility away. IBM ordered staff to work from one of its six main offices, a move critics said was designed to encourage older workers to leave, and Yahoo!’s ban on working from home did the company’s image no favors.
Musk also has an interest in selling cars, and people who don’t commute to work may be less inclined to buy one.
On Wednesday, Musk took his case to the US Court of Appeals after a lower court denied his request to quash the settlement agreement.
The CEO landed in hot water with the watchdog after claiming he had secured the funding to take the company private at $420 a share, when he had neither the funding nor approval to do so. The stock price jumped more than 10 percent as investors piled in.
The SEC accused Musk of fraud, saying he misled the public and disrupted the market. After being sued by the US regulator, Musk agreed to pay $40 million in penalties, step down as chairman of the automaker’s board, and have his material social media posts screened.
He now wants to end that final part of the agreement. Musk’s legal team argued that the SEC doesn’t have the authority to restrict his free speech, and that it’s unfair for the watchdog to conduct roving investigations into Musk’s activities while he is restrained from posting.
The federal judge in New York denied the request. “Musk was not forced to enter into the consent decree; rather, ‘for his own strategic purposes, Musk, with the advice and assistance of counsel, entered into these agreements voluntarily, in order to secure the benefits of the agreement, including finality,’” district court judge Lewis Liman wrote.
Now that the spectre of the litigation is a distant memory and Musk’s company has become, in his estimation, all but invincible, the judge added, he cannot seek to withdraw the agreement he knowingly and willingly entered.
Today, Musk’s lawyers filed notice of their intention to take the case to the Court of Appeals in an attempt to overturn the decision. It is not clear how the case will proceed.
Musk loves to talk about free speech, or his definition of it, on and about the social network, and he is due to take questions from its workers on Thursday. No doubt he would also like to free himself of his commitment to the SEC.
The Register has asked the SEC for comment.
After allowing a $12.5 billion margin loan against his stock to expire, Musk now has to personally secure $33.5 billion to fund his $44 billion purchase of Twitter.
Regulatory filings show an additional $6.25 billion in equity financing on top of the original $27.3 billion. Musk had initially planned to provide $21 billion in equity and $12.5 billion in margin loans to complete the purchase of the social network; the margin loan was cut to $6.25 billion on May 5.
It has been suggested that the world’s richest man could walk away from the deal unless he gets proof of the number of bot accounts on the social media platform.
Some thought he was trying to drive the price down when he questioned the company’s “less than 5 percent” bot claim.
Tesla stock has taken a beating following Musk’s attempt to buy the social networking site: the electric carmaker has lost 25 percent of its value since the takeover was agreed, with investors worried about slower growth, rising inflation and interest rates.
The original margin loan agreement was worth $12.5 billion when the deal was struck. Abandoning the loan should relieve some of that pressure on the carmaker’s shares.
According to the Financial Times, Musk is trying to reduce the funds he needs to power the deal by courting additional investors, including Jack Dorsey.
At the time of writing, the company’s share price is $37.16, well below Musk’s offer of $54.20 per share.
Musk has suggested making the platform less censorious, as well as changing its subscription service, banning advertising, and offering an option to pay in cryptocurrencies.
He also wants to reverse Donald Trump’s permanent ban, imposed after supporters of the former president stormed the Capitol on January 6 last year in protest at the election of Joe Biden.
The ban was described by Musk as “a morally bad decision and foolish in the extreme” at the “Future of the Car” conference.
Musk said that banning Trump from the social media site didn’t end his voice but will amplify it, which is why he considers the ban morally wrong.
Some analysts think the deal will be completed.
Musk’s lawyers said on Monday that he is prepared to walk away from his takeover of the social media site, accusing it of covering up the true number of fake accounts.
Musk offered the all-cash deal, worth over $44 billion, in April; the company’s board initially resisted his attempt to take it private before accepting. Musk secured $7.14 billion from investors and sold another $8.4 billion worth of his Tesla shares.
Morgan Stanley, Bank of America, and others pledged to lend the remaining $25.5 billion. The takeover appeared imminent as rumors swirled over how Musk would make the company profitable and take it public again in a future IPO. But the tech billionaire got cold feet and started backing away from the deal last month, claiming it couldn’t go forward unless the company proved that fake accounts make up less than five percent of all users.
Musk has now taken the issue even further: his lawyers wrote a letter to the company’s chief legal officer stating that their client is willing to pull out of the deal completely over the disagreement on fake accounts.
Musk agreed to pay a $1 billion breakup fee if he walked away from the takeover, having waived detailed business due diligence. This latest letter could be an attempt to wriggle out of paying that fee, an angle for a lower price tag on the business, or a bid to simply end it all.
The missive, published by the SEC, stated that Mr Musk believed the company was resisting and obstructing his information rights under the merger agreement. This, the letter said, is a clear violation of the agreement, giving Mr Musk the right not to complete the transaction and to terminate it.
Meanwhile, Texas Attorney General Ken Paxton, who denies separate charges of securities fraud against him, said Monday that he is investigating whether the website has broken the state’s Deceptive Trade Practices Act by misleading people about the number of bots on the social network.
According to a statement from Paxton’s office, “Twitter has received intense scrutiny in recent weeks over claiming in its financial regulatory filings that fewer than 5 percent of all users are bots, when they may in fact comprise as much as 20 percent or more.” The difference, the office suggested, could affect the cost to Texas consumers.
Twitter’s CEO has said it is difficult to come up with an exact figure, since not all suspect accounts turn out to be bots. In its most recent quarterly financial results, the company reported average international monetizable daily active users (mDAU) of 189.4 million, up 18.1 percent on the same quarter of the previous year.
Musk thinks the numbers are wrong: he believes there are more bot accounts than the company admits. His lawyers claimed that attempts to get more data from the social network have been unsuccessful; according to the letter, Mr Musk has repeatedly asked the company for the information to support his evaluation of fake accounts.
The company’s latest offer to simply provide additional details regarding its own testing methodologies, whether through written materials or verbal explanations, is tantamount to refusing Mr Musk’s data requests, the letter said, calling the offer an attempt to obfuscate and confuse the issue. Mr Musk has made it clear that he does not believe the company’s testing methodologies are adequate; he needs to conduct his own analysis, and the data he has requested is necessary to do that.
The company is still pushing to close the deal, and hit back at the claims in a statement to The Register.
The representative said the company will continue to share information with Mr Musk in order to complete the transaction, that the agreement is in the best interest of all shareholders, and that the company intends to close the transaction and enforce the merger agreement.
Musk said his bid to acquire and privatize the social network cannot move forward until it proves that fake bot accounts make up less than five percent of users.
Last month, the world’s richest meme lord bought a 9.2 percent stake in the micro-messaging service, declined an offer to join the board, and then offered to buy the whole platform for $54.20 per share. The board put a “poison pill” in place to fend off a hostile takeover before ultimately accepting the deal.
The less-than-five-percent figure appears in the company’s SEC filings. Musk objected to it, arguing it was too low an estimate of the real bot population; he had previously vowed to deal with spam bots if he took over the social network.
After Musk kicked up a fuss and vowed to investigate, Twitter CEO Parag Agrawal responded with a string of messages defending the less-than-five-percent figure, to which Musk replied with a poop emoji. As part of the takeover red tape, the supremo had promised not to insult the company’s representatives.
On Tuesday, Musk said that he was pausing his proposed purchase until he could see proof that the figure was legit.
The chief said that his offer was based on the accuracy of the company’s SEC filings.
“Yesterday, Twitter’s CEO publicly refused to show proof of <5%. This deal cannot move forward until he does."
The average US mDAU was 38.6 million, up 6.4 percent from the previous year, and the average international mDAU was 189.4 million, up 3 percent. The company also disclosed that an account-linking error caused it to overstate mDAU from Q1 2019 to Q4 2021.
Detecting and removing bots is difficult, he said: some suspect accounts are legitimate, and some legitimate-looking accounts are fake. Determining the true number is harder still because the estimates are based on analysis of user behavior, and he claimed internal estimates for the past four quarters have consistently put spam at well under five percent of accounts.
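To put the disputed percentages in perspective, here is a back-of-the-envelope conversion into absolute account numbers, using the mDAU figures the company reported (a rough illustration only; the true bot count is exactly what is in dispute):

```python
# Convert the contested bot percentages into absolute numbers of
# accounts, using the reported mDAU figures (in millions).
us_mdau = 38.6
intl_mdau = 189.4
total_mdau = us_mdau + intl_mdau  # roughly 228 million

# Twitter's sub-5 percent claim vs the 20 percent ceiling Paxton cites.
for share in (0.05, 0.20):
    print(f"{share:.0%} of {total_mdau:.0f}M mDAU is about "
          f"{total_mdau * share:.0f}M accounts")
```

The gap between the two estimates is therefore on the order of 35 million accounts, which is why the figure matters so much to the deal's valuation.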
A sale to Musk is still on the table, and the CEO said he was looking forward to talking with the billionaire. The biz has filed a preliminary proxy statement with the SEC reiterating Musk’s offer to purchase the company for $54.20 per share in cash. “We are committed to completing the transaction on the agreed price and terms as quickly as possible,” it said.
The transaction is subject to the approval of Twitter stockholders, the receipt of applicable regulatory approvals and the satisfaction of other customary closing conditions, and is expected to close in 2022, according to the statement.
Musk previously promised to front $21 billion from his own fortune, with the remaining $25.5 billion backed by Morgan Stanley, Bank of America, and others. He has since secured another $7.14 billion from investors and sold $8.4 billion worth of Tesla shares to help finance the deal. He may have to pay $1 billion in penalty fees if he backs out.
The social network did not comment. Its share price is down 18 percent in the last five days.
It was a good week for free speech advocates as a judge ruled that copyright law cannot be used to circumvent First Amendment anonymity protections.
The decision from the US District Court for the Northern District of California overturns a previous ruling that would have compelled the social media site to reveal the identity of a user accused of violating the DMCA.
The EFF said the ruling confirms that the constitutional test courts must apply before unmasking anonymous speakers remains valid.
The case involves an anonymous account that made statements about wealthy people, including Jeff Bezos and Nancy Pelosi. In 2020, the account posted photos of an alleged partner of a private equity billionaire and accused him of having an extramarital affair.
After the photos were published, a company claiming to own the rights to them filed a DMCA takedown notice with the social network, then went to court to obtain the name of the person behind the account that posted the images.
The judge in the case said that a two-step inquiry must be conducted to determine whether a request for the identity of an anonymous speaker should be granted.
First, the party seeking disclosure must make a strong showing on the merits of its underlying claim. Second, the court balances the need for discovery against the First Amendment interests at stake. The judge explained that the entity behind the claims failed on both counts.
On the second step, the court said there is no question that significant First Amendment interests are at stake, and that the user’s anonymity deserved protection given the potential for retaliation over the statements about Brian Sheth, the private equity tycoon at the center of the case.
The social network has fought legal battles over the anonymity of its users before: in 2010 it went to court to fight a Pennsylvania grand jury subpoena demanding the identities of two users who had made critical comments about the state’s Attorney General.
Its record isn’t spotless, though: the company once accidentally gave advertisers access to personal information about its users, including email addresses and phone numbers used to register accounts. Your online privacy mileage may vary.
The ruling does set the stage for future decisions on anonymity across other platforms, from social media to websites.
If a trio of Senate Democrats get their way, the US could have a law similar to the EU’s universal charging mandate.
The proliferation of charging standards has created a messy situation for consumers as well as an environmental risk, according to the senators’ letter.
As specialized chargers become obsolete, or as consumers change phone or device brands, outdated chargers are usually just thrown away. European Commission statistics show that discarded and unused chargers create more than 11,000 tons of e-waste annually.
Consumers feel it too, the senators argued: the average consumer owns three mobile phone chargers, yet 40 percent report having been unable to charge a device because the right cable wasn’t to hand.
The EU’s response was to shut the door of its single market on devices that don’t use the standard USB-C connector. Commissioners voted early this month to make that policy law, giving tech companies 24 months to adapt.
Though this type of law doesn’t single out Apple, the iMaker is definitely a central subject. Speaking to reporters at a press conference announcing the new law, Maltese MEP Alex Agius Saliba said the rule applies to everyone, and called Apple out.
“Apple has to follow the law,” he said.
Apple may dump the Lightning port used in its iPhones in favor of USB-C in the second half of next year, according to an analyst speaking in May. He said the switch could improve transfer and charging rates.
Under the EU rules, all such electronics, including cameras, phones, tablets, earbuds, speakers and the like, must charge through a common USB-C port. The iPhone is the most prominent holdout still using a proprietary Lightning port.
“We urge the Department of Commerce to follow the EU’s lead by developing a comprehensive strategy to address unnecessary consumer costs, mitigate e-waste, and restore sanity and certainty to the process of purchasing new electronics,” the senators’ letter said.
It’s not a stretch to say most people would be happy to ditch cable spiderwebs and cluttered drawers for a few USB-C cords. There is a catch, though: some USB-C cables and ports are designed to deliver only power, not data, and there is no easy way to tell them apart.
If the Department of Commerce takes action, it will need to be very precise to prevent USB-C from becoming another addition to a confusing maze in which the wrong cable is still the wrong cable, except now they all look the same.
Congress’s failure to pass pending legislation that includes tens of billions of dollars in subsidies to boost semiconductor manufacturing and R&D in the country has angered Big Tech in America.
In a letter [PDF] sent to Senate and House leaders on Wednesday, the CEOs of Alphabet, Amazon, Dell, IBM, Microsoft, and dozens of other tech and tech-adjacent companies urged the two chambers of Congress to reach consensus on the long-stalled bill.
The rest of the world is not waiting for the US to act, the letter warned: it is imperative that Congress act to enhance US competitiveness, as global competitors are investing in their industry, their workers, and their economies.
The Semiconductor Industry Association organized the missive, which was signed by top executives in the industry, including AMD CEO Lisa Su, Intel CEO Pat Gelsinger, and GlobalFoundries CEO Thomas Caufield.
The association said it hopes the final legislation will include an investment tax credit that semiconductor manufacturing and design companies can use, in addition to the $52 billion in chip subsidies that has been the heart of the bill.
Tech executives are frustrated that the US competitiveness bill has been stuck in Congress for months. The House of Representatives passed its version of the legislation in February and the Senate passed its own in June 2021, but the two chambers have only recently begun trying to reconcile their differences.
“We’ve already wasted several quarters since the Senate acted last year, and now it’s time for us to move forward quickly,” Gelsinger told Congress back in March.
The underlying problem is that the US has fallen behind Asia in chip manufacturing. The US share of global chipmaking capacity fell from 37 percent in 1990 to 12 percent today, and about 80 percent of chip production now takes place in Asia.
Tech companies and government officials have pushed for chip subsidies for a variety of reasons, including guarding against future chip shortages, fighting inflation, reducing reliance on chipmakers in Asia, and hedging against future instability.
The legislation risks collapsing in Congress in the face of increased skepticism from Republicans, and competition for attention from other issues, like the seemingly never-ending problem of gun violence.
Some Democrats and Republicans are worried that the White House hasn’t done enough to get Congress to support the bill, while White House officials counter that the private sector hasn’t done enough to communicate the bill’s importance to politicians.
The Semiconductor Industry Association persuaded executives at more than 120 companies to sign the letter, which is rather short but gets to the point by its second paragraph.
While many of the signatories of the Wednesday letter represent US companies, there are a few foreign firms represented too.
Taiwan-based TSMC and South Korea’s Samsung are building new manufacturing plants in Arizona and Texas, respectively, and want their share of the US chip subsidies.
Both companies, which have benefited from generous state support in their home countries, spoke out in March about the need for the US to consider foreign firms when handing out chip cash, after Intel proposed that the funding be reserved for domestic companies. The x86 giant has since gone quiet on the matter.
“Arbitrary favoritism and preferential treatment based on the location of a company’s headquarters is not an effective or efficient use of the grant and ignores the reality of public ownership for most of the leading semiconductor companies,” TSMC said in a statement to the US Department of Commerce.
Many foreign and domestic companies hope the US will use taxpayer dollars to boost chip manufacturing and research. It’s been difficult to make the bill a priority because of several issues facing the US, such as gun violence, inflation, and attempts to subvert democracy.
You probably know that Intel made major manufacturing missteps over the past several years, giving rivals like AMD a major advantage, and that the x86 giant is now in the midst of an ambitious five-year plan to regain its chip-making mojo.
This week, Intel is expected to detail how it will make chips that are faster, less costly and more reliable from a manufacturing standpoint, at the Symposium on VLSI Technology and Circuits that begins on Monday. The Register and other media were given a sneak peek.
The node in question is Intel 4, the new name for the chipmaker’s 7nm process. Intel plans to use it next year for the compute tiles in its Meteor Lake CPUs for PCs and in its Granite Rapids server chips.
Intel has promised that Intel 4 will deliver a 20 percent improvement in performance per watt over Intel 7.
Ben Sell, the executive in charge of Intel 4 development, said in the briefing that his team achieved a 21.5 percent performance improvement for Intel 4 over Intel 7 at the same power level. Alternatively, Intel 4 can use 40 percent less power than Intel 7 while delivering the same performance.
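Taken at face value, those two data points translate into performance-per-watt terms as follows (a quick sketch using only the figures Sell quoted; real silicon won't scale this cleanly across operating points):

```python
# Express Sell's two Intel 4 vs Intel 7 claims as perf-per-watt ratios.
iso_power_perf_ratio = 1.215   # 21.5% more performance at the same power
iso_perf_power_ratio = 0.60    # 40% less power at the same performance

ppw_at_iso_power = iso_power_perf_ratio / 1.0   # 1.215x perf/W
ppw_at_iso_perf = 1.0 / iso_perf_power_ratio    # ~1.67x perf/W

print(f"perf/W at equal power:       {ppw_at_iso_power:.2f}x")
print(f"perf/W at equal performance: {ppw_at_iso_perf:.2f}x")
```

Both figures clear the 20 percent performance-per-watt improvement Intel promised; they differ because the two comparisons are made at different points on the voltage/frequency curve.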
This means that future chips like Meteor Lake will have the better performance we always hope for from a new node, or better efficiency: the gains could translate into longer laptop battery life or lower PC power consumption.
Progress on Intel 4 is positive, according to Sell, the vice president of technology development: the node is right where Intel wants it to be.
One advancement Sell’s team made to boost Intel 4’s frequency is a 2x increase in the capacitance of the metal-insulator-metal (MIM) capacitor, a building block Intel has used in its chips since the 14nm process that debuted in 2014 with the Broadwell CPUs.
The increased capacitance smooths out large voltage swings, which increases the voltage available to the CPU cores and allows them to run at a higher frequency.
“What we have seen on products is that, overall, this equates to a higher frequency that you can run the product at,” he said.
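The physics here can be sketched with a first-order model: the MIM capacitor acts as a local charge reservoir, and for a current transient of magnitude I lasting a time dt, the resulting supply droop is roughly dV = I·dt/C. The numbers below are purely illustrative, not Intel's:

```python
# First-order model of supply-voltage droop: dV = I * dt / C.
# Doubling the on-die capacitance halves the droop for a given
# current transient, leaving more usable voltage headroom.

def droop_v(current_a: float, duration_s: float, capacitance_f: float) -> float:
    """Voltage droop for a current step supplied from local capacitance."""
    return current_a * duration_s / capacitance_f

i_a, dt_s, c_f = 10.0, 1e-9, 100e-9  # 10 A for 1 ns, 100 nF of capacitance

print(f"{droop_v(i_a, dt_s, c_f):.2f} V droop")      # 0.10 V droop
print(f"{droop_v(i_a, dt_s, 2 * c_f):.2f} V droop")  # 0.05 V with 2x capacitance
```

Less droop means the cores see a steadier supply during sudden load spikes, which is what lets them hold a higher frequency without extra voltage margin.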
Making the chip-making process more reliable, reducing costs, and improving performance are all important for a new manufacturing node. Sell said his team made good progress on all three thanks to Intel 4’s use of EUV lithography, an advanced process that uses extreme ultraviolet light to etch chip designs onto silicon.
Sell says that EUV has allowed Intel to simplify the process: the company can cut the number of exposure steps needed to pattern certain features onto wafers from five to one. Everything can now be printed with a single layer, he said.
Sell told us that when new chips enter production, the number of wafers with defects will go down.
Even though EUV machines are expensive, using them will lower Intel’s chip-making costs for products built on Intel 4: according to Sell, EUV reduces the number of process steps and the number of tools needed to make chips.
“A lot of the other tools that we have in our factory are not needed once you combine everything into a single step,” he said.
The simplified process could allow Intel to increase production capacity.
That also means less demand for fab floor space: Sell said Intel can either build fewer fabs or get more output from the ones it has.
These and other process improvements are part of a modular approach Intel is taking to developing new nodes. This is a big change from the chipmaker’s previous approach, which contributed to the major delays and gaffes in the development of its 10nm and 7nm processes over the last several years.
“Rather than having one giant step, we’re going into a much more modular approach, which means you have a few smaller steps and a few modules in the process,” Sell said. “It’s a lot easier to get each module developed in time, without the complexity of having everything else solved to understand this module.”
Apple has tacitly admitted that its new M2 chip can’t match the peak performance of the latest silicon from Intel.
Apple focused its high-level sales pitch for the M2 on claims that it is more power efficient than Intel’s latest laptop CPUs. In doing so, the iPhone maker conceded that Intel has the better peak performance at the moment.
Johny Srouji, Apple’s senior vice president of hardware technologies, said during the presentation that the M2’s eight-core CPU will provide 87 percent of the peak performance of Intel’s 12-core Core i7-1260P while using just a quarter of the power.
Apple’s slide comparing the M2’s power efficiency against Intel’s processor. Click to enlarge.
In other words, Intel’s Core i7-1260P is about 15 percent faster than Apple’s M2 at peak, and that’s not even considering the two more powerful i7s in Intel’s P-series lineup.
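Apple's headline numbers can be unpacked with a little arithmetic, taking the company's own claim (87 percent of Intel's peak performance at a quarter of the power) at face value:

```python
# Unpack Apple's M2 vs Core i7-1260P claim: 87 percent of Intel's peak
# performance at a quarter of the power (Apple's numbers, unverified).
m2_rel_perf = 0.87
m2_rel_power = 0.25

intel_peak_lead = 1.0 / m2_rel_perf - 1.0        # Intel's edge at peak
m2_ppw_advantage = m2_rel_perf / m2_rel_power    # M2's perf-per-watt edge

print(f"Intel peak-performance lead: {intel_peak_lead:.0%}")   # 15%
print(f"M2 perf-per-watt advantage:  {m2_ppw_advantage:.1f}x")  # 3.5x
```

The same claim thus supports both readings at once: Intel wins on raw peak speed, while the M2 wins decisively on efficiency at the operating point Apple chose to highlight.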
The company also claimed that the M2’s processor is 1.9x faster than Intel’s 10-core Core i7-1255U while using the same amount of power, though the i7-1255U is one of Intel’s low-power U-series parts rather than its fastest laptop silicon.
When the M1 was released in 2020, Apple argued that performance-per-watt was the more important metric.
Srouji said Apple’s approach is different from others in the industry, which increase power to gain performance; the company’s constant focus is power-efficient performance, “maximizing performance while minimizing power consumption.”
Performance-per-watt isn’t the only way Apple hopes the M2 will stand out when it lands in the MacBook Air and MacBook Pro next month.
The tech giant is making a bigger bet on the chip’s neural engine because it believes more applications will rely on graphics and artificial intelligence in the future.
The M2 is still an amazing chip, especially its neural engine. Click to enlarge.
This is reflected in Apple's decision to dedicate more transistors to the M2's 10-core GPU and 16-core neural engine. Those design choices allowed the Mac maker to claim a 35 percent boost for the GPU and a 40 percent boost for the neural engine compared to the M1, while the M2's CPU improved by only 18 percent in multi-threaded performance, according to Apple.
Putting more weight on the GPU and neural engine makes sense, since they could make a bigger difference: applications that lean heavily on the CPU, like web browsers, don't have as great a need for faster silicon.
"Web browsers don't need a lot more performance, so the comparison on CPU performance is probably less relevant in my mind," The Register was told. "Apple wants to show they are competitive with Intel, and that they may be ahead with neural processing and better graphics."
If you believe Apple's claims, the M2's GPU looks good. Click to enlarge.
While Apple didn't give a competitive comparison for the M2's neural engine, it did claim the 10-core GPU is 2.3x faster than the integrated graphics in Intel's Core i7-1255U at the same power level. Apple also said the M2's GPU can match the i7-1255U's peak graphics performance while using only one-fifth of the power. The i7-1260P has faster built-in graphics than the i7-1255U, but Apple didn't provide that comparison.
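Those two GPU claims describe two points on the same power-performance curve; a small sketch of what the matched-performance claim implies:

```python
# Two separate Apple claims about the M2's 10-core GPU vs the
# i7-1255U's integrated graphics:
same_power_speedup = 2.3    # 2.3x faster at equal power
matched_perf_power = 1 / 5  # equal performance at one-fifth the power

# At the matched-performance point, performance-per-watt is simply
# the inverse of the power ratio: a 5x efficiency advantage.
perf_per_watt_gain = 1 / matched_perf_power
print(perf_per_watt_gain)  # 5.0
```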
Apple may not have the industry's fastest processor for an ultra-light laptop, but it is leaning into a growing trend in the compute world: dedicated accelerators for the areas that matter most may be more important than a faster central brain.
After taking serious market share from Intel over the last few years, Advanced Micro Devices has revealed larger ambitions in artificial intelligence, datacenters and other areas.
A renewed focus on building better, faster chips for servers and other devices, along with becoming a bigger player in artificial intelligence, were among the ambitions the company laid out at its Financial Analyst Day event on Thursday.
"These are where we think we can win in terms of differentiating ourselves," CEO Lisa Su said in her opening remarks. "It's about compute technology leadership. It's about expanding leadership in the datacenter. It's about broadening our footprint. We're expanding our software capability. We think this is a growth area going forward, and that's why we're bringing together a broader custom solutions effort."
At the event, the company revealed new plans for its server and client chips, as well as plans to introduce a new kind of hybrid chip for datacenters.
The company also announced plans for a unified software interface for programming artificial intelligence applications on different kinds of chips, which is similar to Intel’s OneAPI toolkit.
The chip designer wants to expand its custom chip business beyond the video game console space into new areas like hyperscale datacenters, automotive, and 5G.
"We have a very broad, high-performance portfolio. We have a leading industry platform for chiplets," Su said, adding that AMD will make it easier to add third-party intellectual property to that chiplet platform.
The upcoming chips will be based on the Zen 4 architecture and include the Ryzen 7000 desktop chips and the Genoa server chips.
The next-generation Zen 5 architecture will arrive in 2024 with machine learning and AI optimizations, along with enhanced performance and efficiency.
Zen 4 will be the first high-performance x86 architecture to use a 5nm manufacturing process and will deliver an 8 to 10 percent increase in instructions per clock (IPC) over Zen 3, according to the previews, a smaller jump than Zen 3 managed over Zen 2.
We can also expect a more than 15 percent boost in single-threaded performance and up to 125 percent more memory bandwidth per core from the new architecture. Zen 4 chips will add instruction set extensions for AI and AVX-512.
A 16-core Zen 4 desktop processor will provide more than a 25 percent boost in performance-per-watt and a greater than 35 percent boost in overall performance over Zen 3, according to the company.
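Taken together, the two claims imply how the power budget changes; a back-of-the-envelope check, treating the stated minimums as point values:

```python
# AMD's two claims for a 16-core Zen 4 desktop part vs Zen 3:
perf_gain = 1.35           # >35% more overall performance
perf_per_watt_gain = 1.25  # >25% better performance-per-watt

# Power = performance / (performance-per-watt), so the implied
# power change is the ratio of the two gains.
implied_power = perf_gain / perf_per_watt_gain
print(f"{implied_power:.2f}x")  # ~1.08x: a modest power increase
```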
AMD's updated Zen architecture roadmap. Click to enlarge.
According to the updated Zen roadmap, Zen 4 will span both 5nm and 4nm process nodes and will come in a variant using AMD's vertical cache technology, in addition to the cloud-optimized Zen 4c architecture.
Zen 5 will follow the same pattern in a few years, with a Zen 5c variant and a vertical cache variant.
The next generation of general-purpose server CPUs, Genoa, is on track to launch in the fourth quarter. In the first half of 2023, the company will release its first lineup of cloud-optimized server CPUs, called Bergamo.
Epyc began splitting into different product lines with the third-generation chips, which were divided into general-purpose parts and technical-computing parts. With the upcoming Zen 4c-powered Bergamo chips, the family is expanding again.
With the general-purpose Genoa chips coming later this year, AMD is promising to deliver "leadership socket and per-core performance" with up to 96 Zen 4 cores as well as "leadership memory bandwidth and capacity" with up to 12 channels of DDR5 memory. The Genoa chips will provide more than 75 percent faster Java performance than the top third-gen Epyc chip, according to the chip designer.
Genoa chips will also support Compute Express Link (CXL) for attaching memory and accelerators. We should expect improvements in confidential computing features with Genoa too, including things like memory encryption.
The top Bergamo chip will provide double the cloud container density of AMD's top third-gen Epyc chip, according to the company. The gain is driven by the fact that Bergamo will feature up to 128 Zen 4c cores and up to 256 threads while supporting 12 channels of memory and PCIe 5.0.
Bergamo is compatible with the Zen 4 instruction set, so applications won't require any code rewrites, and it drops into the same SP5 server platform as Genoa.
More Epyc server chips are on the way. Click to enlarge.
Beyond the general-purpose Genoa and cloud-optimized Bergamo chips, the Zen 4 architecture will extend to two more sets of Epyc silicon. Genoa-X, the successor to Milan-X, will focus on technical computing and database applications with up to 96 cores and a massive L3 cache.
The other targets the intelligent edge and telecommunications, new focus areas for Epyc, and will come with up to 64 cores in a lower-cost platform.
The company also announced that the fifth generation of Epyc will be code-named Turin.
Signaling bolder ambitions to compete with Intel and Nvidia in the accelerator space, the company also revealed the Instinct MI300, which will mix Zen 4-based Epyc CPU cores with a GPU using its fresh CDNA 3 architecture.
The chip designer is calling the Instinct MI300 the "world's first datacenter APU," or accelerated processing unit.
This means that within the next two years, all of the major chipmakers will have hybrid CPU-GPU chips: Nvidia's Grace Hopper Superchip is due early next year, and Intel's Falcon Shores XPU in 2024.
The Instinct MI300 is said to deliver an 8x boost in AI training performance over the Instinct MI250X, which launched last fall as part of a set of datacenter GPUs that are more competitive against Nvidia's A100 than previous attempts. AMD is also promising "leadership" memory bandwidth and application latency.
The Instinct MI300 will provide more than a 5x increase in performance-per-watt compared to the Instinct MI200 series, according to the company. This will be achieved using a 5nm process, 3D chiplet packaging, a unified memory architecture that lets the CPU and GPU share memory, and new generations of AMD's Infinity cache and Infinity architecture. The MI300 will use "groundbreaking" 3D packaging for its high-bandwidth memory, and the design will allow the APU to use less power than implementations that pair separate processors.
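If we treat the MI200-series baselines as comparable, the two MI300 claims together bound how much power the chip can draw; a rough sketch:

```python
perf_gain = 8.0        # claimed 8x AI training boost over the MI250X
efficiency_gain = 5.0  # claimed >5x performance-per-watt over MI200

# Power = performance / (performance-per-watt), so the implied
# power ceiling is the ratio of the two claims: at most ~1.6x.
implied_power_ratio = perf_gain / efficiency_gain
print(implied_power_ratio)  # 1.6
```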
After teasing some integration plans last month, the company is now providing more details of how it plans to use Xilinx’s artificial intelligence and fabric technologies in multiple products.
These "adaptive architecture" building blocks will carry the brand name "XDNA," and they include Xilinx's AI engine and FPGA fabric.
The dataflow architecture of the artificial intelligence engine makes it a good choice for signal processing applications that need a mix of high performance and energy efficiency.
The FPGA fabric, for its part, combines adaptive interconnect with FPGA logic and local memory.
After teasing plans in May, the company said it will use the AI engine in future Ryzen processors, including two generations of laptop CPUs coming over the next few years.
With the goal of giving developers a single interface to program across different kinds of chips, the company promised to unify previously disparate software stacks for CPUs, GPUs and adaptive chips into one.
The first version of this Unified AI Stack will bring together AMD's ROCm GPU software and its CPU software along with the Vitis AI software from Xilinx.
Several disparate software stacks brought together into one. Click to enlarge.
This will let developers use popular AI frameworks like PyTorch and TensorFlow across AMD's different chip types.
"We're going to unify even more of the stack. We're going to have a lot of commonality in our libraries and in our graph compiler. We're definitely also going to roll out a lot more pre-optimized models for these targets," said Victor Peng, who now heads the adaptive and embedded computing group.
The company shared some details of consumer products coming over the next few years toward the tail end of the event.
In the future, consumers can expect two types of desktop chips, one using the 3D vertical cache technology introduced with the Ryzen 7 5800X3D earlier this year.
The company's latest Ryzen desktop roadmap. Click to enlarge.
Granite Ridge is the code name for the Zen 5-based desktop chips due in 2024.
The next generation of Ryzen laptop chips, code-named Phoenix Point, will use Zen 4 and the company's new RDNA 3 architecture for integrated graphics when it arrives in 2023. The generation after that, known as Strix Point, will use Zen 5 and an improved version of RDNA 3. Both chips will include AI engines from the XDNA adaptive architecture portfolio.
A new laptop roadmap from AMD. Click to enlarge.
The company also disclosed details of the RDNA 3 architecture that will underpin the upcoming Navi 3 products, saying it will provide "industry-leading performance-per-watt" as well as "system-level efficiency and advanced multimedia capabilities."
The RDNA 3 architecture will combine a chiplet design, a next-generation Infinity cache and a 5nm process, allowing it to provide a greater than 50 percent boost in performance-per-watt over the RDNA 2 architecture that powers the most recent Radeon products. RDNA 4 will follow as the company's next graphics architecture.
With an expanded portfolio of chips that cover everything from the edge to the cloud, AMD is hoping to become a big player in the artificial intelligence compute space.
It’s an ambitious goal, given the dominance of Nvidia in the space and the increasing competition from Intel and several other companies.
During the financial analyst day event last week, executives from the chip designer said that they believe they have the right tools in place to pursue the wider artificial intelligence space.
"Our vision is to provide a broad technology roadmap across training and inference that touches cloud, edge and endpoint, and we can do that because we have exposure to all of those markets and all of those products," said Lisa Su in her opening remarks at the event.
It will take a lot of work for the company to catch up in the space, but Su said AI represents the company's single largest growth opportunity.
At last week's event, executives said the company has begun to see some traction in the market for AI compute, with Epyc chips being used for inference and Instinct GPUs for training.
According to Dan McNamara, the head of AMD's Epyc business, multiple cloud service providers are already using the company's software optimizations, via its ZenDNN library, to give their recommendation engines a "very nice performance boost."
ZenDNN, short for Zen Deep Neural Network, is supported by the second and third generations of Epyc chips and integrates with popular machine-learning frameworks.
“I think it’s important to say that a large percentage of the inference is happening in the processor and we expect that to continue going forward,” he said.
More AI capabilities will arrive at the hardware level in the near future.
This includes the AVX-512 VNNI instructions for accelerating neural network processing, which will be introduced in the next-generation Epyc chips.
Because the capability is implemented in Genoa's Zen 4 architecture, VNNI will also be present in the company's Ryzen 7000 desktop chips due by the end of the year.
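VNNI's headline instruction, VPDPBUSD, fuses an int8 dot product and a 32-bit accumulate into a single operation; here is a minimal Python model of the per-lane arithmetic (the real 512-bit instruction processes 16 such lanes at once):

```python
def vpdpbusd_lane(acc, a_bytes, b_bytes):
    """Model one 32-bit lane of AVX-512 VNNI's VPDPBUSD:
    multiply four unsigned 8-bit values by four signed 8-bit
    values, sum the products, and add to a 32-bit accumulator."""
    assert len(a_bytes) == len(b_bytes) == 4
    return acc + sum(u * s for u, s in zip(a_bytes, b_bytes))

# Without VNNI this takes separate multiply and add instructions;
# fusing them is the win for int8 neural network inference.
print(vpdpbusd_lane(10, [1, 2, 3, 4], [5, -6, 7, -8]))  # -8
```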
The company will also use the AI engine technology from its $49 billion acquisition of Xilinx to expand the capabilities of its processors.
Xilinx building blocks will appear in future chips. Click to enlarge.
The artificial intelligence engine will be incorporated into several new products across the company’s portfolio.
The artificial intelligence engine will be used in two future generations of the Ryzen laptop chips. The first is code-named Phoenix Point and will arrive in 2023, while the second is code-named Strix Point and will arrive in 2024. The artificial intelligence engine will be used in a future generation of server chips, though it’s not yet known when that will happen.
The first chips using the next-generation Zen 5 architecture are expected to debut in 2024.
AMD's most recent generation of Instinct GPUs, the MI200 series, has made some headway in AI training, and the company is hoping to make further progress in the near future with new software improvements.
For instance, in the latest version of ROCm, there are new features for training and inference workloads.
David Wang, the head of the graphics business at the company, said that the company has expanded ROCm support to its consumer-focused Radeon graphics cards.
He said that they’re developing SDKs with pre-optimized models to ease the development and deployment of AI applications.
Industry leaders including Microsoft and Meta have developed partnerships with the company.
He said ROCm support for PyTorch has been improved to deliver "amazing, very, very competitive performance" on internal AI workloads and open source benchmarks.
AMD hopes the Instinct MI300, which it bills as the "world's first datacenter APU," and its new CDNA 3 architecture will make the company even more competitive in the AI space.
The Instinct MI300 is expected to deliver an 8x boost in performance over the Instinct MI250X chip that is currently in the market.
Forrest Norrod, head of the Datacenter Solutions Business Group, said that the MI300 is a truly amazing part, and that it points the direction of the future of acceleration.
The chip designer made it clear that the Xilinx acquisition will help it cover a wider range of opportunities in the AI space and strengthen its software offerings, the latter being something it needs if it wants to compete.
This case was laid out by Victor Peng, the former CEO of Xilinx, who now heads the Adaptive and Embedded Computing Group at AMD.
Prior to the Xilinx acquisition, the company's coverage of the AI compute space was mostly limited to the cloud, enterprises, and home PCs.
With Xilinx's portfolio under its umbrella, the chip designer now has far more coverage of the AI market. Xilinx's adaptive chips are used in health care and life sciences, transportation, smart retail, smart cities, and intelligent factories, among other industries. Telecommunications providers use Xilinx's Versal adaptive chips, while its Kintex and Alveo products are used in the cloud.
AMD's products cover several industries in the AI compute space. Click to enlarge.
The heavy-duty training may be happening in the cloud, but the company is already present in many of the areas where AI is being deployed, Peng noted.
Peng said Xilinx's products are "very complementary" with AMD's portfolio, and the company is positioning its combined offerings against a wide range of application needs:
- Ryzen and Epyc CPUs, including future Ryzen CPUs with the AI engine, will cover small to medium models for training and inference
- Epyc CPUs with the AI engine, Radeon GPUs and Versal chips will cover medium to large models for training and inference
- Instinct GPUs and Xilinx’s adaptive chips will cover very large models for training and inference
When the company starts incorporating AI into more of its products and moves to the next generation, it will cover much of the space across these model sizes, Peng said.
Different chips cover different parts of the AI spectrum. Click to enlarge.
If the company wants broader industry adoption of its chips, it will need to ensure that developers can easily program their applications across this cornucopia of silicon.
That's why the chip designer plans to unify the software stacks for its CPUs, GPUs and adaptive chips into one interface. The first version of this unified stack will bring together software from the two companies to provide a single development and deployment flow for inference.
The company plans to consolidate more software components in the future so that, for instance, developers only have to use one machine learning graph compiler for any chip type.
The goal is to let people target any one of these architectures in the same environment, with more of the middleware unified in the next generation.
Such a strategy will require a lot of heavy lifting, and doing right by developers, for it to work, as the company itself acknowledged.
Interview 2023 is shaping up to be a big year for Arm-based server chips, and a significant part of this drive will come from Nvidia, which seems steadfast in its belief in the future of Arm, even if it can’t own the company.
Several system vendors are expected to offer servers using the Arm-based chips next year. The Grace Hopper Superchip brings together one Grace CPU with one Hopper GPU, while the Grace CPU Superchip combines two Grace CPUs.
US companies like Dell Technologies, HPE and Supermicro, as well as China's Inspur and Taiwan's ASUS, are among the vendors lined up. The servers will focus on areas including AI training and inference, high-performance computing, digital twins, and cloud gaming and graphics.
The chip designer hopes to lure operators and developers to the Arm side with the promise of major advancements over x86 chips currently on the market.
The Grace Superchip includes up to 1TB of error-correcting LPDDR5x memory and as much as 1TB/s of memory bandwidth. Its two CPUs communicate with each other over Nvidia's 900GB/s NVLink-C2C interconnect.
“What Grace allows us is to push the boundaries of innovations and address the gaps that are there in the market,” Paresh Kharya, director of datacenter computing, told The Register.
He claimed the 900GB/s link is seven times faster than the PCIe Gen 5 connectivity that will be used in upcoming server chips from Intel and AMD, and said nothing else on the market matches that speed.
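The "seven times" figure roughly checks out against PCIe 5.0's published link rate; a quick sketch of the arithmetic:

```python
# NVLink-C2C vs PCIe Gen 5: a rough check of the "7x" claim.
nvlink_c2c = 900  # GB/s, per Nvidia

# PCIe 5.0 runs at 32 GT/s per lane; with 128b/130b encoding a
# x16 link carries roughly 63 GB/s each way, ~126 GB/s total.
pcie5_x16_bidirectional = 2 * 16 * 32 * (128 / 130) / 8

print(f"{nvlink_c2c / pcie5_x16_bidirectional:.1f}x")  # ~7.1x
```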
Kharya also claimed the memory subsystem is twice as energy efficient, thanks to the use of LPDDR5x, and offers twice the memory bandwidth of systems currently on the market.
For CPU-bound tasks, Nvidia estimates a system with the Grace Superchip will score 740 on the SPECrate 2017_int_base benchmark. If we take the company's numbers at face value, that would make the system 50 percent faster than its DGX A100, which uses two 64-core AMD Epyc 7742 processors.
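A back-of-the-envelope reading of those two numbers gives the implied baseline for the dual-Epyc system:

```python
grace_score = 740      # Nvidia's estimated SPECrate 2017_int_base
claimed_speedup = 1.5  # "50 percent faster" than the DGX A100

# The implied baseline score for the dual-Epyc-7742 DGX A100:
implied_dgx_a100 = grace_score / claimed_speedup
print(round(implied_dgx_a100))  # ~493
```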
Nvidia compared the Grace Superchip against x86 processors that are now three years old because the DGX A100 remains the "top of the line server" available today for AI applications, Kharya said.
"We love all the innovation that comes to the market from x86 CPUs, and we and our customers are able to take advantage of it, but at the same time we are able to push the boundaries of innovation and fill in the gaps," he said.
To take advantage of these capabilities, operators and developers will need to make the leap from the comfortable world of x86 systems to the less familiar world of Arm servers.
It may seem like a big leap, but Nvidia has been working with Arm to prepare the server software ecosystem. The company announced expanded support for its CUDA programming model on Arm back in 2019, and more of Nvidia's software is compatible now.
We've been on a constant journey towards that since we announced the project a few years ago, Kharya said. All of Nvidia's key stacks now support Arm, including its AI platform, the Omniverse platform for digital twins, and the Nvidia HPC platform, and the company is working across the ecosystem to ensure readiness.
Arm-based servers using Ampere Computing's Altra chips are already on the market, and they are included in the Nvidia-Certified Systems program.
The US Department of Energy's Los Alamos National Laboratory plans to use both Grace and Grace Hopper Superchips in its next-generation Venado supercomputer, which is expected to be among the first systems to use the chips.
As organizations start putting the company’s server designs through their paces, the true test will play out over the next few years.
Nvidia's upcoming DGX H100 AI system will pair the company's flagship H100 GPUs with Intel's next-gen Xeon Scalable processors.
During a discussion at the BofA Securities Global Technology Conference on Tuesday, Nvidia co-founder and CEO Jensen Huang confirmed the choice of processor. The DGX family is the premier vehicle for Nvidia's GPUs: the machines come pre-loaded with its software and tuned to provide the fastest AI performance as individual systems or in large supercomputer clusters.
Since the DGX H100 was announced in March, we and other observers have wondered which x86 server processor it would use.
The DGX H100 will arrive by the end of the year with eight H100 GPUs based on the Hopper architecture. Nvidia claims a single system will be capable of delivering 32 petaflops of AI performance using its FP8 format, with the GPUs connected by fourth-generation NVLink.
Huang confirmed the selection of Sapphire Rapids for the DGX H100 while voicing his continued support for x86, even as the company plans to introduce its first Arm-based server CPU, Grace, next year.
"We buy a lot of x86 CPUs. We have good partnerships with both Intel and AMD. For the Hopper generation, I've selected Sapphire Rapids to be the processor, and it has excellent single-threaded performance," he said at Tuesday's event, adding that Nvidia is qualifying the combination for hyperscalers and datacenters around the world, as well as for its own DGX systems and supercomputers.
The selection of Intel's upcoming Sapphire Rapids chip, which has already started shipping to some customers, marks a reversal of sorts for Nvidia, which chose AMD's second-generation Epyc server CPU, code-named Rome, for the DGX A100 system it introduced in 2020.
This comes after industry publication ServeTheHome reported in mid-April that Nvidia had DGX H100 designs for both Sapphire Rapids and Genoa, though it was not yet known which x86 chip would make the cut.
While Intel will consider this a victory as it works to regain technology leadership after years of missteps, it's a relatively small win next to the bigger battle over GPUs and other accelerators playing out between Nvidia, Intel, and other companies. It's why, for instance, Intel is making a big bet on its upcoming Ponte Vecchio GPUs, and why AMD has pushed to become more competitive against Nvidia with its latest Instinct GPUs.
Nvidia, for its part, has decided to build its own Arm-compatible CPU so that it can put a CPU and a GPU together in the same package, speeding up the flow of data between the two components.
The first iteration of this design, called the Grace Hopper Superchip, will be introduced by Nvidia next year, along with a new kind of DGX system that uses Grace. Intel is planning a combined CPU-GPU design of its own with the Falcon Shores XPU.
During Tuesday's talk, Huang promised that Grace will allow the company to fine-tune everything from the components to the systems to the software. While the Arm-compatible chip is designed to benefit the recommender systems and large language models used by hyperscale companies, it will serve other applications too.
"Grace has the advantage in every single application domain that we go into, whether it's machine learning, cloud gaming, or digital twin simulations; we have all of the ecosystem lined up. In all of the spaces that we're going to take Grace into, we own the whole stack, so we have the opportunity to create the market for it," he said.
Intel has bagged a marquee customer for its next-generation Xeon Scalable processor, but the x86 giant has admitted that a broader release of the server chip has been delayed.
Intel datacenter boss Sandra Rivera confirmed the delay during a Tuesday panel discussion at the BofA Securities 2022 Global Technology Conference, the same event where Nvidia's CEO said his company would use the chip, rather than AMD's upcoming Genoa, in its flagship DGX H100 system.
After falling behind on technology over the past few years, Intel is trying to get back in the game with the next generation of Xeons, code-named Sapphire Rapids. With industry-first support for new technologies such as PCIe Gen 5 and Compute Express Link, Intel hopes it will beat AMD's next-generation Epyc chip to market.
Sapphire Rapids has now been delayed multiple times. In June of last year, Intel said it was pushing production of the chip from the fourth quarter of 2021 to the first quarter of 2022, with plans to ramp up shipments in the second quarter. That followed the several years it took Intel to make its 10nm process viable for mass production.
At the Tuesday event, Rivera said Intel will start ramping up production of Sapphire Rapids later in the year than originally planned, with the delay needed to allow more time for platform and product validation.
She pointed to Intel's new chips for PCs and laptops as proof that the Intel 7 manufacturing process is in good health.
When it does become available, Rivera said, Sapphire Rapids will be a "leadership" product, though she admitted Intel's leadership window will be shorter than hoped: the delay means AMD's Genoa chip, slated to launch later this year, will now arrive soon after.
"We would have liked more of that gap, more of that leadership window for our customers in terms of when we originally forecasted the product to be out and ramping in high volume, but because of the additional platform validation that we are doing, that window is a bit shorter," she said, adding that it also depends on where the competition lands.
Rivera said "demand is still very high" for Sapphire Rapids, which server makers and so-called hyperscalers have already received to validate on their platforms. She acknowledged, however, that not all customers will move to the new platform in one step.
A compute-heavy customer like Nvidia, she suggested, will take advantage of Sapphire Rapids' support for new technologies like DDR5 and its improvements in performance and total cost of ownership.
The Ice Lake ramp continues to grow, meanwhile, with record revenue and volume in the fourth quarter. Rivera said Ice Lake will remain the highest-volume product even as Sapphire Rapids ramps later in the year.
Rivera said the follow-on Emerald Rapids chip will provide a "nice performance boost in terms of the memory, the networking and the overall performance" while fitting into the same socket as Sapphire Rapids.
That shared socket, she said, will make Emerald Rapids an easier upgrade for customers and give them a bigger return on their investment in the platform and its innovations.
AMD is likely to give an update on Genoa and future generations of Epyc chips at its financial analyst event streaming on Thursday. The rival chip designer said in May that it remained on track to launch Genoa in the second half of the year.
It is important to note that Intel faces other threats in the datacenter. Companies like Amazon Web Services and Ampere Computing are claiming advantages over Intel's chips with new processors based on Arm's architecture, and Nvidia plans to debut its first Arm-based server chip in the first half of 2023.
Rivera said Arm's share of the server market remains small, but she acknowledged that cloud service providers are interested in alternatives to the way Intel's Xeon chips have traditionally been designed. That's why the semiconductor giant plans to introduce Sierra Forest, a chip built on its efficiency-core design, in 2024.
"A lot of those cloud customers that are looking at efficiency-core types of workloads don't want all of those additional features," she said. "They just want high-density throughput, single-threaded performance, and lots and lots of cores for some of the workloads."
AMD has come out on top in compute performance in a recently published study of the major clouds.
In performance tests across the three most popular cloud providers, instances using AMD's multi-core x86-64 Milan and Rome processors beat out instances using Intel's Cascade Lake and Ice Lake parts.
The researchers used the CoreMark version 1.0 benchmark, which can be limited to a single vCPU or run its workload across multiple vCPUs, to show that Milan-based instances outperformed those using Ice Lake.
In the past, we’ve seen Intel lead the pack in overall performance, with AMD competing on price-for-performance metrics. This year, both the overall performance leader and the price-for-performance leader were based on the same processor.
The Milan-based t2d instance was followed by GCP's n2-standard instance using Intel Ice Lake processors. "AWS's large M6i instance, which uses Ice Lake processors, finished third, and other instances rounded out the top ten." Two of Azure's instance types had individual runs that could have broken into the top ten, but their median runs were less performant than either AWS or GCP's offerings.
All three major cloud providers offer similarly price-competitive instances, the study found.
All three clouds were in a statistical dead heat when it came to price. Depending on the requirements of a specific workload, even instance and storage combinations that are a bit more expensive are very competitive.
Storage and transfer costs matter more to the total cost of operating on a given cloud provider than instance pricing, according to the database company behind the study.
Storage and data transfer can become hidden costs, having a larger impact on total cost than the price of the instances, especially when it comes to building a highly resilient stateful application.
“If there is one point to take away from this year’s report, especially if I were a CIO or CTO building a globally distributed application concerned about cost when picking a cloud provider, network transfer cost is where I would focus my attention,” said McClellan. “Our findings shine a light on the total cost to operate.”
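The point about hidden costs can be made concrete with a simple monthly cost model. All of the rates below are invented placeholders, not any provider’s actual prices; the shape of the arithmetic is what matters:

```python
# Hypothetical monthly cost model: instance, storage, and network egress.
# All prices are invented placeholders, not real provider rates.
HOURS_PER_MONTH = 730

def monthly_cost(instance_usd_hr, storage_gb, storage_usd_gb_mo,
                 egress_gb, egress_usd_gb):
    compute = instance_usd_hr * HOURS_PER_MONTH
    storage = storage_gb * storage_usd_gb_mo
    transfer = egress_gb * egress_usd_gb
    return compute, storage, transfer

# A highly resilient stateful app: a modest VM, but lots of disk
# and heavy cross-region replication traffic.
compute, storage, transfer = monthly_cost(
    instance_usd_hr=0.10,                        # small VM
    storage_gb=2_000, storage_usd_gb_mo=0.10,    # 2 TB of block storage
    egress_gb=5_000, egress_usd_gb=0.09,         # replication egress
)
print(compute, storage, transfer)
# Storage plus egress can dwarf the instance bill itself.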
The most recent list of the world’s 500 fastest publicly known supercomputers shows that AMD has become a darling among organizations that run x86-based clusters.
The most recent update of Top500 was published on Monday.
Frontier is the world’s first publicly benchmarked exascale supercomputer, achieving 1.1 exaflops on the Linpack benchmark used to rank the world’s top systems.
It was only a few years ago that Intel and the DOE said the Intel-powered Aurora would be the first exascale system in the US, but delays have pushed the machine’s arrival back to later this year. Those delays appear to be why Intel changed Aurora’s delivery date from 2021 to 2022 and removed the mention of it being the first US exascale supercomputer.
As a fun side note, The Register noticed Intel edited its 2019 press release about Aurora to remove the mention of it being "the first exascale supercomputer" and to change the delivery date from 2021 to 2022. You don't often see companies editing old press releases like this. pic.twitter.com/4HqipenMPD
— Semiconductor News by Dylan Martin (@DylanOnChips) May 31, 2022
When considering systems that don’t have publicly submitted benchmark results, Frontier may not actually be the world’s fastest. There are two systems in China that have reached a peak performance of 1.3 exaflops, but the systems’ operators have yet to submit results to Top500.
When AMD launched its first-gen Epyc chips, its CPUs accounted for only six of the world’s fastest 500 supercomputers.
The just-published update shows that 93 of the top 500 are now powered by AMD’s Epyc processors, nearly double the company’s share of the list in the spring of last year.
AMD’s CPUs are present in five of the top 10, 10 of the top 20, 26 of the top 50, and 41 of the top 100.
Intel, meanwhile, has seen its share of the Top500 fall to 388 systems from 464 five years ago, with the spring 2022 update bringing the x86 giant below four-fifths of total systems for the first time in nearly ten years.
Intel’s CPUs are present in one of the top 10, five of the top 20, 15 of the top 50, and 46 of the top 100.
One of the things that has helped AMD gain traction over the last few years is that its Epyc server CPUs have had higher core counts than Intel’s Xeon CPUs, making them well-suited for applications that scale with cores.
This is reflected in the latest Top500 list, where AMD accounts for 27 percent of total cores even though Intel’s CPUs are present in far more systems; Intel’s cores represent 45 percent of the total.
It’s important to remember that the world of high performance computing is made up of more than just x86 chips: 19 of the supercomputers use chips that weren’t designed by Intel or AMD.
IBM’s Power chips, which drive the systems in the four and five spots, account for more than half of those systems’ cores, while the Fujitsu A64FX chips that power Japan’s Fugaku system account for nine percent of all cores on the list.
NEC’s Vector Engine chips represent a small percentage of cores, as does the Chinese ShenWei chip at the heart of the Sunway TaihuLight system and the Hygon Dhyana chip, a Chinese joint-venture design based on AMD’s first-generation Zen architecture.
168 of the Top500 systems use GPUs or other accelerators, reflecting how many high-performance applications have come to take advantage of them, even if most of the world’s fastest systems still go without. 157 of those accelerated systems use parts from Nvidia; AMD, which for the last few years had only one system on the list using its graphics silicon, is nowhere close to Nvidia’s share.
AMD did see a slight increase in its share of accelerated systems thanks to seven new machines that combine its new Instinct MI250X GPU with the third generation of its Epyc chips.
Frontier and two other supercomputers among those seven systems are in the list’s top 10. The Epyc chips used in these systems, code-named Trento, have an I/O die that allows the two processor types to share memory more easily.
Still, AMD clearly has a way to go before it can take any meaningful GPU share from Nvidia, especially as the number of systems with accelerators continues to increase, giving Nvidia an opportunity to defend its footprint.
There are a few curiosities in the realm of the Top500. Two systems still use Intel’s discontinued Xeon Phi accelerators, one Chinese system uses the Matrix-2000 accelerator created by the National University of Defense Technology in China, and another Chinese machine uses a homegrown “Deep Computing Processor.”
Japan fields two systems with homemade accelerators: one using the PEZY-SC3, developed by the country’s PEZY Computing, and another using the MN-Core, developed by Preferred Networks.
While the latest Top500 update shows momentum for AMD, we should remember that Intel is hungry to make up for the missteps of the past several years and build more competitive chips again.
And with Europe and China increasingly looking to design their own silicon, more future systems could use chips based on Arm and other alternative architectures. AMD, in other words, shouldn’t get too comfortable.
PC makers who faced a shortage of Threadripper chips earlier this year will finally be able to sell workstations using AMD’s latest models later this month.
AMD’s Ryzen Threadripper Pro 5000 will be made available to leading system integrators in July and to DIY builders through retailers later this year. Dell, for its part, revealed plans, almost two weeks after this announcement, to release a Threadripper Pro 5000 workstation in the summer.
The coming wave of Threadripper Pro 5000 workstations will end the exclusive window that Lenovo has had with the high-performance chips since they launched in April.
Making matters worse, smaller system builders experienced a severe shortage of last-generation Threadripper 3000 CPUs in the first half of 2022.
The lack of supply across both generations left buyers with fewer options, a big deal in the workstation world, where AMD has been seen as the go-to choice for high-end desktops thanks to chips that are faster and more capable than Intel’s.
Maingear, Puget Systems, and Velocity Micro told us a few months ago that the Threadripper shortage was slowing down their business and forcing them to recommend Intel-based systems in multiple cases.
The good news for existing owners is that you’ll be able to add a Threadripper Pro 5000 chip to your current board with a BIOS update.
While the expansion of Threadripper Pro 5000 availability is a positive development for workstation vendors and buyers, it also confirms what some industry players suspected: the death of the non-Pro Threadripper CPUs.
We shouldn’t expect a Threadripper 5000 lineup like we saw with the 3000 and previous generations, because only Threadripper Pro will be developed going forward. The chip designer said it made this move to serve what the most demanding enthusiasts and content creators want.
In painting the news as a positive development, AMD said Threadripper Pro 5000 will give users 128 lanes of PCIe Gen 4 connectivity, 8-channel UDIMM and RDIMM support, a massive L3 cache, and management and security features.
The catch is that Threadripper Pro parts are more expensive than the non-Pro parts favored by the consumer set.
AMD introduced the Threadripper Pro brand in 2020 with the Threadripper Pro 3000 chips.
A branch of the regular Threadripper processors, these chips were made with professionals in mind, offering capabilities from higher-capacity, error-correcting memory to more than double the PCIe lanes.
That professional focus comes at a price. Last year, Tom’s Hardware noted that the Threadripper Pro 3995WX had a recommended price of $5,489, which was $1,499 higher than the Threadripper 3990X, while the 32-core Pro version cost $750 more than its non-Pro counterpart.
The chips aren’t the only expensive part. Puget Systems said the motherboards can also pump up the price of a system compared with non-Pro Threadripper builds, and a larger tower chassis is required to accommodate the high number of PCI-Express lanes and memory channels these chips offer. The company tried to explain what was happening with Threadripper in a May post, saying that what used to fit in a mid-tower for a reasonable price now requires a full-tower case and costs thousands of dollars more.
We will grant that Threadripper Pro systems are more affordable than workstations using AMD’s server-grade Epyc chips, but those hoping to build a workstation-ish system on a budget may want to check out the latest high-end consumer CPUs from Intel instead.
Lenovo’s new small-form-factor desktop workstation is smaller than previous designs but still has the kind of performance professional users need.
The ThinkStation P360 Ultra will be available at the end of this month, though not with Xeon chips, as far as we know.
Many professional users will be pleased by support for up to eight displays, as well as up to 8 terabytes of storage via plug-in M.2 cards. The price is expected to start at $1,299.
The new system isn’t the smallest one Lenovo makes; that would be the 1-liter ThinkStation P360 Tiny, an updated version of the ThinkStation P350 Tiny. The new form factor nonetheless packs the high-end professional components you’d expect from a tower-format chassis.
According to the company’s own tests, the new system outperforms previous-generation small-form-factor desktop workstations by over 50 percent.
The company developed a compact form factor that can fit both the RTX A5000 graphics card and the cooling needed to support it. The system has an unusual layout, with a dual-sided board positioned in the middle of the case to improve airflow and support a processor running at up to 125W.
Lenovo’s Rob Herman said the desktop workstation was purpose-built to deliver the impressive performance in a space-saving form factor that customers need.
The ThinkStation P360 Ultra is also tested against demanding standards for ruggedized hardware, though it isn’t billed as a ruggedized machine.
To cater to customers in Europe, the Middle East, and Africa, Lenovo has opened a manufacturing facility in Hungary.
Lenovo says the central location within Europe and strong infrastructure made Üllő the ideal location for the factory.
Part of Lenovo’s investment is supported by the Hungarian Investment Promotion Agency, and the country’s lower wage structure may also have played a role in the selection process.
The site employs over 1,000 full-time staff in a range of engineering, management and operational roles, as the facility moves towards full capacity.
The initial plan was to open the new factory in Spring 2021.
Lenovo EMEA president Francois Bornibus welcomed the opening, saying the company had reached a “milestone” in the evolution of its manufacturing network, which includes a mix of both in-house and contract manufacturing.
“Hungary’s well connected location puts us closer to our European customers so that we can fulfill and sustain their needs while remaining at the forefront of innovation,” he said. “As our business continues to grow around the world, this incredible new facility will play a key role in our plans to ensure future success and bring smarter technology for all to Europe more quickly, cheaply and efficiently.”
The new site is one of Lenovo’s largest. Its production lines are said to be capable of making more than 1,000 servers and 4,000 workstations a day, with each built to customer specifications.
A building management system was built into the factory to monitor temperature, humidity, and asset conditions.
With an eye on environmental concerns, the factory was fitted with solar panels that can provide half a megawatt of power, enough for the equivalent of a small village. Combined with new manufacturing processes, such as a patented low-temperature solder process, this will contribute to the company’s climate goals.
Lenovo’s infrastructure group, meanwhile, has reported its first annual profit since buying IBM’s x86 server business.
Quarterly revenue for Q4 was $16.7 billion, a 7 percent increase over the same quarter in the previous year, while annual revenue was $71.6 billion, an 18 percent increase, and annual net income was $2 billion.
Lenovo has halved its range of large portable workstations.
The Chinese PC giant has launched the ThinkPad P16, and The Register has confirmed that the P15 and P17 are to be retired.
The P16 machine has Intel 12th Gen HX CPUs, all the way up to 16-core i9 models, with discrete graphics as an option.
Storage can reach 8 terabytes and memory 128 gigabytes. The machine appears to have a single USB-A port, along with USB-C and HDMI ports.
Lenovo says the machine combines the best features of the P15 and P17 into an all-new, compact and improved form factor, and the graphic below depicts that combination.
Lenovo’s portable workstation plan. Source: Lenovo
The Register inquired about the meaning of that graphic, and was told it signifies a single portable workstation from Lenovo where there were previously two.
As the graphic shows, the P17’s Intel Xeon processor option and 17-inch screen are both absent from the P16. Intel’s new HX silicon takes the Xeon’s place, so mobile workstation users now top out at the Core i9 HX. P17 owners will be deprived of a little screen real estate when they move to a 16-inch screen, while P15 users will have to carry a slightly bigger machine.
Linux wasn’t offered as a pre-installed option on the P15, which also couldn’t scale up to the P16’s storage and memory capacities.
It looks like the P16 is a combo of the two previous mobile workstation offerings.
Lenovo hasn’t said why it decided to reduce its range, but a likely reason is that mobile workstations are not a high-volume product, so the company can’t sustain two models. If we receive substantive information from Lenovo, we’ll update this story.
Updated at 20:45 on May 20th: The laptop’s specifications have been corrected in this article.
Intel wants €593m in interest charges after successfully appealing Europe’s antitrust fine.
After years of fighting the fine, the x86 chip giant was told it didn’t have to pay up after all, and now the US tech titan wants damages for being screwed around by the EU.
According to official documents published on Monday, Intel has gone to the EU General Court seeking payment of compensation and consequential interest for the damage caused by the European Commission’s refusal to pay Intel default interest.
Based on the European Central Bank’s refinancing rate, which was 1.25 percent when the penalty was imposed in 2009, Intel calculates that it is owed more than half the value of the €1.06bn fine in interest.
Intel wants the court to impose additional interest on late payment of charges moving forward.
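For a rough sense of where a figure like €593m comes from: EU default interest is generally charged at the ECB refinancing rate plus 3.5 percentage points, and simple interest on the €1.06bn fine at that combined rate over the roughly twelve years Intel was out of pocket lands in the same ballpark. This is a sketch under simplifying assumptions (flat rate, no compounding, round number of years), not the court’s actual calculation:

```python
# Back-of-the-envelope sketch of Intel's interest claim.
# Assumptions (simplified, not the actual legal computation):
#  - fine of EUR 1.06bn paid in 2009, reimbursed in early 2022
#  - default interest = ECB refinancing rate (1.25%) + 3.5 points
#  - simple, non-compounding interest over ~12 years
fine_eur = 1.06e9
rate = 0.0125 + 0.035          # 4.75 percent per year
years = 12

interest_eur = fine_eur * rate * years
print(f"~EUR {interest_eur / 1e6:.0f}m")
```

This crude estimate comes out a little above €600m, close enough to the €593m claim to show that the uplifted default-interest rate, rather than the 1.25 percent headline rate alone, is doing the work.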
This is, by the way, the same Intel that wants subsidies to build a factory in Germany.
The European Commission and the chip goliath have been fighting over the latter’s allegedly anticompetitive conduct for well over a decade.
Intel gave its hardware partners incentives to use its x86 processors, which put rival AMD at a disadvantage: the chip giant handed rebates to major computer makers for using its chips over AMD’s, and was accused of paying a German electronics retailer not to sell computers containing competitors’ components.
A five-year investigation concluded in 2009 that Intel had engaged in anticompetitive behavior between October 2002 and December 2007, and the company was hit with one of the largest fines the Commission had ever imposed.
The Commission found that Intel’s anticompetitive conduct reduced consumer choice and lowered incentives to innovate.
Intel appealed the fine multiple times, at first unsuccessfully, before the European Court of Justice sent the case back to the General Court.
In January this year, after more than a decade of legal wrangling, that court sided with Intel, calling the Commission’s analysis incomplete and saying it had failed to establish to the requisite legal standard that the rebates at issue were capable of having, or likely to have, anticompetitive effects.
The saga is not done. The European Commission said in April that it would appeal the court decision. That appeal is still going on.
According to a report, Intel is set to receive $7.3 billion in subsidies for a massive chip manufacturing campus it’s planning in Germany, and the x86 giant won’t have to worry about TSMC setting up shop somewhere nearby for the time being.
According to local media, Martin Kröber, Magdeburg’s representative in the Bundestag, disclosed the German subsidies for Intel’s planned wafer fabrication site last week. According to Kröber, the federal government has allocated over two billion dollars to the project so far.
According to Germany’s Deutsche Presse-Agentur, the government is discussing the possibility of subsidies for other projects too.
The news is likely to be of some relief to Intel CEO Pat Gelsinger, who has been begging the US Congress to pass chip subsidies in America for the company’s planned fabs in Ohio and Arizona. The House of Representatives and the Senate have been working to reconcile the CHIPS for America Act since May.
Germany will cover roughly 40 percent of the initial €17 billion Intel plans to spend on the mega-site. The project is part of a larger €33 billion investment in Europe planned by the American chipmaker, which will include an R&D and design hub in France as well as manufacturing, foundry, and chip-packaging operations in Italy, Poland, and Spain.
In the first phase of the project, Intel’s massive site in Magdeburg will consist of two neighboring fabs occupying the space of two football fields. The campus is projected to create 3,000 permanent high-tech jobs at the chipmaker, as well as tens of thousands of additional jobs at suppliers and partners.
The plants are expected to begin manufacturing chips using an advanced node in 2027.
The same can’t be said for Taiwan’s TSMC, the world’s largest contract chip manufacturer, which makes chips for companies including Apple and Intel.
The chairman of TSMC said on Wednesday that the company has “relatively fewer customers” in Europe, and that it has no concrete plans to open a plant there.
A year ago, TSMC said it was in the early stages of considering an expansion into Germany, but the statement shows that the chipmaker hasn’t made much progress.
EU officials have been working with Taiwan’s government to lure the island nation’s chipmakers to Europe.
The effort is part of the EU’s proposed European Chips Act, which was revealed in February to boost the bloc’s competitiveness and resilience in chips while also supporting digital transformation and environmental goals.
Last week, Taiwan’s Ministry of Economic Affairs announced a “major breakthrough” in talks with the EU about cooperation in the semiconductor industry, which could pave the way for Taiwanese chipmakers to build new facilities in Europe.
For now, though, it seems the EU shouldn’t count on TSMC and should instead look to Taiwan’s other foundries, such as UMC and PSMC, even though they have less advanced manufacturing capabilities.
China should seize Taiwan to gain control of TSMC if the United States and its allies impose sanctions against the Middle Kingdom like those levied against Russia, according to a prominent Chinese economist.
The suggestion echoes a US Army War College paper last year proposing that Taiwan should destroy its chip factories if China invaded.
Chen Wenling, chief economist at the China Center for International Economic Exchanges, delivered the remarks in a speech at the China-US Forum hosted by Renmin University of China in May. The text of the speech was later posted on an online news site.
A confrontation between the US and China would be a disaster for mankind, Chen said in the speech.
She claimed that the US was trying to create two large “anti-China” trade bodies, although the US pulled out of the Trans-Pacific Partnership.
According to a translation of the text, Chen said that China needs to take steps to secure its industrial chain and supply chain and make strategic preparations to deal with the United States’ insistence on breaking it.
If the US and its allies impose sanctions on China as they have on Russia, China must recover Taiwan and “seize TSMC, a company that originally belonged to China,” she said.
Chen claimed TSMC is building six factories in the United States, adding: “We must not let all of the goals of the transfer be achieved.” That is possibly a reference to the US CHIPS Act, which seeks to encourage the building of semiconductor fabrication plants on US soil, and which may include funding for the chipmaking facilities TSMC is building in Arizona.
Chen’s speech suggests that China should only take this action as a response to threats against its economic security, and there is no reason to believe that China will follow in Russia’s footsteps.
If the Taiwanese government adopted the scorched-earth policy proposed by the US Army War College last year, any attempt by China to seize Taiwan would be futile.
The paper argued that Taiwan’s best deterrent against potential Chinese aggression is a credible strategy to destroy its manufacturing facilities if an invasion were to occur, depriving China of the source of much of its chips, a supply that Semiconductor Manufacturing International Corporation (SMIC), China’s homegrown champion, could not replace.
Taiwan accounts for a large part of the world’s chip manufacturing capacity and is seen as vital by both the US and China: the island holds 48 per cent of the global foundry market and 61 per cent of the world’s capacity to fabricate chips on a 16-nanometer process.
China illustrates that dependence, having last year produced only one in six of the chips its industries used, despite setting an ambitious goal to be 70 percent self-sufficient.
TSMC reported revenue of $18.6 billion in the first quarter of 2022, a 36 percent increase over the same quarter last year. High demand from the automotive and high performance computing markets will continue to drive sales for the current quarter, according to the company.