IT decision makers taking part in the survey have a suggestion for bosses feeling the effects of the Great Resignation: go remote and hire people.

After all of the benefits of working from home became apparent, businesses began to rethink the structure of how their entire company works.

Offering remote work is having a positive effect on hiring, according to a report by Foundry. The work-from-home shift has made it easier to fill open positions, according to 42 percent of survey respondents, and 64 percent of companies have committed to being permanently remote.

Some organizations that have insisted on a return to the office have faced employee revolt and been roundly criticized.

The hybrid model is the future of work, according to the study.

70 percent of respondents said there had been a positive shift in their organization toward supporting remote work, 69 percent said it’s caused them to change how they plan office space and staffing, and 62 percent said they considered revising processes and workflows.

IT equipment giants such as Dell have seen remote work as a boon, with infrastructure purchases continuing to increase. Businesses are spending more on security and network infrastructure to support a remote workforce.


A switch to permanent remote work is opening up a few new issues, but most people are happy working in their slippers. Thirty-two percent of respondents said they were concerned about proximity bias, which could leave remote workers with worse odds of career advancement than their face-to-face colleagues.

Some people are worried that hybrid or remote work can have a negative impact on diversity, equality, and inclusion efforts.

Workers in the US shouldn’t get their hopes up about the four-day workweek: most of Foundry’s responses came from there, and 60 percent of respondents did not support the idea. By contrast, our informal poll found that 84 percent of Reg readers backed a four-day workweek.


Page 2

Despite the poor economic outlook, the UK’s technology sector is still hiring and increasing salaries.

The labor market outside the tech sector is starting to cool, according to a survey that found a drop in the number of businesses planning to recruit more staff over the next six months. In the first quarter of this year, 52 percent of companies across all sectors said they would increase recruitment; in the second quarter, that figure fell to 41 percent.

The impact of the war in Ukraine, the hike in inflation, ongoing supply chain issues, and weakened customer demand were among the reasons respondents cited.

According to the Office for National Statistics, there were 76,000 vacancies in the information and communication sector in the first quarter of 2022, up from 69,000 in the final quarter of the previous year.

The number of new tech companies incorporated in the UK increased by 62 percent during the year.

The survey put 20 tracker questions to 700 senior executives at middle-market companies, asking about the current and future state of business.

Mid-market companies have annual revenues of between £10 million and £750 million; the UK has around 33,000 such organizations.

Labor shortages in the media and tech sectors are due to the relatively new skillsets the industries need, according to an economist.

It will take time for those skills to become more commonplace in the economy, so it is important for the sector to invest in training and upskilling its people rather than trying to recruit more from a very limited pool.

The tech sector in the UK has increased salaries by 36 percent since 2015, according to the consultants.

While a shortage of talent has kept salaries for programmers and IT specialists at a premium, offshore IT support rates from parts of the world where costs are usually lower are beginning to match those in the UK.

Page 3

The four-day week is gaining steam as return-to-office attempts fail for large tech businesses.

The world’s biggest trial program began this week in the UK, with participating companies paying employees a full week’s pay for 80 percent of the usual hours. The pilot may be the largest, but it is not the only one.

Dell recently switched to a four-day week in the Netherlands after trialing it in Argentina, and the success of that trial was one of the reasons it decided to make the change.

Bolt adopted the four-day week in January, and the team at Kickstarter moved to four-day weeks in March.

Thousands of workers from across the country are taking part in a four-day week trial in the UK from June to December this year, including some from Canon’s UK arm.

There’s mounting evidence that four-day work weeks make employees more productive. David Simpson wrote that the number of hours spent at our desks doesn’t correlate with our happiness and productivity.

It’s hard to ignore the constant headlines about the changing nature of work since the coronavirus pandemic began. Most of the news has centered on return-to-office initiatives and their failures, but the four-day week has been a topic of discussion for a long time.

A trial of the four-day work week in Japan resulted in a 40 percent increase in productivity, though the measured results were not published until 2021, thrusting the four-day week into the spotlight at the same time as returning to the office became a hot topic.

The 4 Day Week Global initiative is a non-profit concern and one of the organizations behind the current six-month four-day work week trial in Britain. It runs pilot programs in the US, Canada, the UK, Ireland, Australia, and New Zealand, as well as offering support for companies considering the shift.

According to 4 Day Week Global’s research, job performance was maintained on a four-day work week while stress levels dropped, and the share of workers satisfied with their work/life balance rose from 54 percent to 78 percent.

63 percent of businesses surveyed by 4 Day Week Global found it easier to attract and retain employees after changing to a four-day work week.

“You can be 100 percent productive in 80 percent of the time in many workplaces, and companies adopting this around the world have shown that,” said Juliet Schor, lead researcher and Boston College economist. Workers in many industries are discovering that their days are filled with unnecessary activities that can easily be cut without hurting the business.

Schor said that industries like health care and teaching, where staff are already stretched thin, would struggle to adapt to a four-day week.

There is also the challenge of operational transformation: a poorly executed four-day week experiment could further burden those with an already overloaded schedule.

“You would have to become 25 percent more productive per day to justify the four-day work week,” said an Institute of Economic Affairs fellow.

Page 4

The executive staff at the car company will not work from afar, according to Musk.

In an email obtained by the New York Times, Musk told the executives that remote work was no longer acceptable.

Anyone wanting to work from home must be in the office for at least 40 hours per week. This isn’t as much as we ask of factory workers.

Musk allows that he may, at his discretion, bend the rules for “particularly exceptional contributors”; if you have to ask, that’s probably not you. And “office,” the billionaire poly-boss says, means a main office, not some remote branch unrelated to one’s duties.

The company did not immediately respond to a request to confirm the authenticity of the directive. Musk, however, did not challenge it when the Whole Mars Catalog asked him to offer his thoughts on people who think coming into the office is an antiquated concept.

Musk said that they should pretend to work somewhere else.

The Register asked Musk whether he plans to work from home or spend his CEO hours in the office. We don’t know what to expect.

According to a Bernstein analyst, Musk’s marching orders are unlikely to reduce executive turnover at the company, which was estimated at 44 percent in 2019, well above the average of 9 percent at other Silicon Valley companies.

Retention might worsen because of the “olly olly oxen free” memo. Many workers came to see remote work as a necessity at the start of the COVID-19 pandemic: 66 percent of job candidates are unwilling to return to the office, according to Robert Half, and a survey of 1,000 US hiring managers conducted by Upwork last year projected that 40.7 million American professionals will be completely remote within five years.

Other Silicon Valley firms give their workers more flexibility than Tesla does. Last month, one such company declared that employees could work anywhere.

Companies that refuse to give employees more flexibility don’t tend to come off well. IBM’s decision to require staff to work from one of its main offices was seen by company critics as a way to encourage older workers to leave, and its ban on working from home did little to improve its image.

Cynics might also note that Musk’s interest in selling cars meshes with the idea that people who don’t commute to work may be less inclined to buy one.

Page 5

On Wednesday, Musk took his case to the US Court of Appeals after a lower court denied his request to overturn the SEC settlement.

The CEO landed in hot water with the watchdog when he said he was thinking of taking the company private at $420 a share; Musk didn’t have the funding or approval to do that. The stock price nevertheless jumped 10 percent as investors bought more shares.

The SEC accused Musk of fraud for misleading the public and causing a disruption in the market. After being sued by the US regulator, Musk agreed to pay $40 million in penalties, step down as chairman of the automaker’s board, and have his social media posts about the company pre-approved by a lawyer.

He wants to end that last part of the agreement. Musk’s legal team argued that the SEC doesn’t have the authority to control his free speech, and that it’s unfair for the watchdog to be allowed “roving” investigations into Musk’s activities.

The request was denied by a federal judge in New York. “Musk was not forced to enter into the consent decree; rather, ‘for his own strategic purposes,’ Musk, with the advice and assistance of counsel, entered into these agreements voluntarily, in order to secure the benefits, including finality,” district court judge Lewis Liman wrote.

Once the spectre of the litigation is a distant memory and Musk’s company has become, in his estimation, all but invincible, he cannot seek to withdraw the agreement he knowingly and willingly entered.

Musk’s lawyers filed their intention to take the case to the Court of Appeals today. It is not clear how the case will move forward.

Musk loves to talk about free speech, or his definition of it, on and about the social network, so we’re reminded that he’s going to take questions from Twitter’s workers on Thursday. He wants to free himself of his commitment to the SEC.

The Register has asked the SEC for comment.

Page 6

After allowing a $12.5 billion margin loan against his stock to expire, Musk must personally secure $33.5 billion to fund his $44 billion purchase of the micro-publishing service.

The original $27.3 billion in equity financing was increased by an additional $6.25 billion, according to regulatory filings.

He had originally lined up $21 billion in equity and $12.5 billion in margin loans to fund the purchase of the social network. On May 5, the margin loan commitment was cut to $6.25 billion, and this additional financing would eliminate it entirely.

It had been suggested that the world’s richest man could walk out on the deal unless he got proof of the number of bot accounts on the social media platform.

Some thought he was trying to bring the price down after questioning the “less than 5 percent” claim.

Following Musk’s attempt to buy Twitter, Tesla stock has taken a beating. The electric car company has lost 25 percent of its value since the takeover was agreed, with investors worried about slower growth, rising inflation, and interest rates that are edging ever higher.

The original margin loan agreement was valued at $12.5 billion before being reduced.

Abandoning the margin loan will relieve pressure on Tesla’s stock.

The Financial Times said that Musk is trying to lower the amount he personally needs to fund the deal by courting additional investors such as Jack Dorsey.

At the time of writing, Twitter’s share price was $37.16; the offer is worth $54.20 a share.

Musk has suggested changing the platform’s subscription service, banning advertising, giving users an option to pay in cryptocurrencies, and making the platform less censorious.

He also wants to reverse Donald Trump’s permanent ban, imposed after supporters of the former president stormed the Capitol on January 6 last year in protest at the election of Joe Biden.

The ban was described as morally bad and foolish by Musk at the “Future of the Car” conference.

Musk said that banning Trump from the social networking site didn’t end his voice; it would amplify it among the right, and that, he argued, is what makes the ban morally wrong and stupid.

Analysts are not sure the deal will be completed.

Page 7

Lawyers for Musk said on Monday that he is prepared to end his takeover of the social media site, accusing it of covering up the number of fake bot accounts.

In April, Musk offered to acquire Twitter for $54.20 per share in an all-cash deal worth over $44 billion, after the company’s board initially resisted his attempt to take it private. Musk sold $8.4 billion of his Tesla shares and secured another $7.14 billion from investors toward the $21 billion he promised to front himself.

Morgan Stanley, Bank of America, and others promised to loan the remaining $25.5 billion. The takeover seemed imminent as rumors swirled that Musk wanted to take the company public again in a future IPO. But the tech billionaire got cold feet and started backing away from the deal last month, claiming it couldn’t go forward unless Twitter proved fake accounts make up less than five percent of all users.

The issue has been taken further by Musk. His lawyers wrote a letter to the chief legal officer of the company stating that his client was willing to pull out of the deal over the disagreement on fake accounts.

Musk agreed to pay a $1 billion breakup fee if he walked away from the takeover. This latest letter could be an attempt to wriggle out of paying that fee, to angle for a lower price tag on the business, or simply to end it all.

The missive was disclosed to and published by the SEC and stated that Mr Musk believed the company was resisting and obstructing his information rights under the merger agreement.

“This is a clear violation of Twitter’s obligations under the merger agreement and Mr Musk has the right to not complete the transaction and to end the merger agreement.”

Meanwhile, Texas Attorney General Ken Paxton, who denies charges of securities fraud against him, said Monday that he is looking into whether the website has broken the US state’s Deceptive Trade Practices Act by misleading people on the number of bots on the social network.

According to a statement from Paxton’s office, “Twitter has received intense scrutiny in recent weeks over claiming in its financial regulatory filings that fewer than 5 percent of all users are bots, when they may in fact comprise 20 percent or more.” It’s possible that the difference could affect the cost to Texas consumers.

Twitter has said it is not easy to come up with an exact figure. In its most recent quarterly financial results, average international monetizable daily active users stood at 189.4 million, up 18.1 percent compared to the same period the previous year.

Musk thinks the numbers are not correct, believing there are more bot accounts than Twitter says. His lawyers claimed that attempts to get more data from the company have been futile: according to the letter, Mr Musk has repeatedly requested information from the company in order to evaluate fake accounts on the platform.

The company’s latest offer, to simply provide additional details regarding its own testing methodologies, whether through written materials or verbal explanations, is equivalent to refusing Mr Musk’s data requests, the letter argued, and only an attempt to obfuscate and confuse the issue. Mr Musk has made it clear that he doesn’t believe the company’s testing methodologies are adequate, so he needs to conduct his own analysis, and the data he has requested is necessary to do that.

The company is still pushing to close the deal and hit back at the claims, according to a statement to The Register.

The representative told us the company will continue to share information with Mr Musk in order to complete the transaction, which it believes is in the best interests of all shareholders, and that it intends to close the deal and enforce the merger agreement.

Page 8

It has been a good week for free speech advocates: a judge ruled that copyright law cannot be used to circumvent anonymity protections.

The decision from the US District Court for the Northern District of California overturns a previous ruling that compelled the social media site to reveal the identity of a user accused of violating the DMCA.

According to the EFF, the ruling confirms that the Constitution’s test for identifying anonymous speakers must still be met.

The case involves an anonymous account that made critical statements about wealthy people like Jeff Bezos and Nancy Pelosi. In 2020 the account posted photos of an alleged partner of a private equity billionaire and accused him of having an extramarital affair.

After the photos were published, a company that claimed to own the rights to them filed a takedown request with the social network, then went to court to get the name of the person behind the account.

The judge in the case said a two-step inquiry must be conducted to determine whether a request for the identity of a speaker should be granted.

First, the party seeking disclosure must make a strong showing on the merits of its underlying claim; second, the court balances the need for discovery against the First Amendment interest at stake. The ruling explains how the entity that filed the claims fell short on both counts.

On the second step, the court found that significant First Amendment interests were at stake: the anonymous critic of Brian Sheth, the private equity tycoon at the center of the case, merited protection in part because of the potential for retaliation.

Twitter has been involved in many legal fights over the anonymity of its users. In 2010 it went to court to fight a Pennsylvania grand jury subpoena seeking the identities of two users who had made critical comments about Pennsylvania’s Attorney General.

That said, the company has accidentally given advertisers personal information about its users, including email addresses and phone numbers used to register accounts. Your online privacy mileage may vary.

This is a narrow ruling, but it sets the stage for future decisions across other platforms.

Page 9

If a trio of Senate Democrats get their way, the US could have a law like the EU’s universal charger mandate.

The proliferation of charging standards has created a messy situation for consumers and an environmental risk, the senators said in a letter to the Commerce secretary.

“As specialized chargers become obsolete, or as consumers change the brand of phone or device that they use, their outdated chargers are usually just thrown away,” the senators wrote. According to the European Commission, more than 11,000 tons of e-waste is created annually by discarded and unused chargers.

The senators argued consumers are also negatively affected: 40 percent reported being unable to charge their device because they lacked the right cable, despite the average consumer owning three mobile phone chargers.

The EU’s response to the proliferation of standards was to shut the door of its single market to anyone not using the standard connector. The policy was voted into law early this month, giving tech companies 24 months to adjust.

Although laws of this type don’t single out Apple, the iMaker is clearly a central subject. Speaking to reporters at a press conference announcing the new law, Maltese MEP Alex Agius Saliba called Apple out while noting that the rule applies to everyone.

“Apple has to follow the rules,” he said.

Apple may dump the Lightning port used in its phones in favor of USB-C in the second half of 2023, an Apple analyst predicted in May, saying the change could speed transfer and charging rates.

The EU’s law requires all such electronics, including cameras, phones, tablets, earbuds, speakers, and the like, to have a USB-C port that can deliver power. The iPhone, with its Lightning port, is the highest-profile device without one.

“We urge the Department of Commerce to follow the EU’s lead and develop a comprehensive strategy to address unnecessary consumer costs, mitigate e-waste, and restore sanity and certainty to the process of purchasing new electronics,” the senators’ letter read.

It’s not a stretch to say most people would be happy to ditch cable spiderwebs and cluttered drawers for a few USB-C cords, but USB-C is itself an unstandardized mess. Some cables are designed to deliver only power, not data, while others vary in the data rates and power levels they support; some ports, likewise, are configured for power only.

If the Department of Commerce takes action, it will need to be very precise to prevent USB-C from becoming another addition to a confusing maze in which the wrong cable is still the wrong cable, only now they all look the same.

Page 10

Big Tech in America has had enough of Congress’s inability to pass pending legislation that includes tens of billions of dollars in subsidies to boost semiconductor manufacturing and R&D in the country.

In a letter [ PDF ] sent to Senate and House leaders Wednesday, the CEOs of Amazon, Dell, IBM, Microsoft, and dozens of other tech and tech-adjacent companies urged the two chambers of Congress to reach consensus on a long-stalled bill.

The rest of the world is not waiting for the US to act, the letter warned. It is imperative that Congress act to enhance US competitiveness, as global competitors are investing in their industry, their workers, and their economies.

The Semiconductor Industry Association organized the missive, which was signed by top executives in the industry, including AMD CEO Lisa Su, Intel CEO Pat Gelsinger, and GlobalFoundries CEO Thomas Caufield.

The association said it hopes the final legislation will include a measure for investment tax credits that semiconductor manufacturing and design companies can take advantage of, in addition to the $52 billion in chip subsidies that has been the heart of the bill.

Tech executives are frustrated that the US competitiveness bill has been stuck in Congress for months. The Senate passed its version in June 2021 and the House of Representatives passed its own in February, and the two chambers have since been trying to reconcile the differences between their respective chip subsidy bills.

“We’ve wasted several quarters since the Senate acted last year, and now it’s time for us to move forward rapidly,” Intel’s Gelsinger told Congress in March.

Over the past few decades, the US has fallen behind Asian countries in chip manufacturing: the US share of chipmaking dropped from 37 percent in 1990 to 12 percent today, while 80 percent of chip production now occurs in Asia.

Tech companies and government officials have pushed for chip subsidies because of a number of reasons: fighting against future chip shortages and inflation; reducing reliance on chipmakers in Asia; and hedging against future geopolitical instability.

The legislation risks collapsing in Congress in the face of increased skepticism from Republicans, as well as the fact that the country is facing other issues, like the seemingly never-ending problem of gun violence.

Some Democrats and Republicans are worried that the White House hasn’t done enough to get Congress to support the bill. The private sector hasn’t done enough to inform politicians of the importance of passing the bill, according to White House officials.

The Semiconductor Industry Association got executives at more than 120 companies to sign Wednesday’s letter. The letter is not very long, but it gets to the point by the second paragraph.

While many of the signatories of the Wednesday letter represent US companies, a few foreign firms are represented too, most notably TSMC and Samsung.

Taiwan-based TSMC and South Korea’s Samsung are in the process of building new manufacturing plants in Arizona and Texas, respectively, in order to get their share of the US chip subsidies.

The companies, which have benefited from generous support in their home countries, spoke out in March about the need for the US to consider foreign firms when giving out chip cash. The concerns were raised after Intel proposed that the funding be used only for domestic companies, a matter on which the x86 giant has since gone silent.

“Arbitrary favoritism and preferential treatment based on the location of a company’s headquarters is not an efficient use of the grant and ignores the reality of public ownership for most of the leading semiconductor companies,” TSMC said in a statement to the US Department of Commerce.

Many foreign and domestic companies hope that the US will use taxpayer dollars to boost chip manufacturing and research. It’s been difficult to make the bill a priority because of several issues facing the US, including gun violence, inflation and attempts to subvert democracy.

Page 11

You probably know that Intel made major manufacturing mistakes over the past several years, giving rivals like AMD a significant advantage, and now the x86 giant is in the midst of an ambitious five-year plan to regain its chip-making mojo.

This week, Intel is expected to detail just how it’s going to make chips in the near future that are faster, less costly and more reliable, from a manufacturing standpoint, at a Symposium on VLSI Technology and Circuits. The Register and other media outlets were given a sneak peek at the event.

The chipmaker’s Intel 4 process was previously known as its 7nm process. Intel will use it for the compute tiles of its Meteor Lake CPUs for PCs and its Granite Rapids server chips, in products entering the market over the next year.

Intel has promised that Intel 4 will deliver a 20 percent improvement in performance-per-watt over Intel 7.

Ben Sell, the executive in charge of Intel 4 development, said in the briefing that his team has achieved a 21.5 percent performance improvement for Intel 4 over Intel 7 at the same power. Alternatively, Intel 4 can provide the same frequencies while using 40 percent less power.

We always hope for better efficiency with new chips, but this means future chips like Meteor Lake should bring better performance and efficiency, which could translate into longer laptop battery life or lower power requirements for a PC or server.

The progress on Intel 4 is very positive. “It’s where we want to be,” said Sell, who is vice president of technology development.

One advancement Sell’s team made to boost the frequency of Intel 4 is a 2x increase in the capacitance of the metal-insulator-metal (MIM) capacitor, a building block Intel has used for chips since the 14nm process that debuted in 2014 with the Broadwell CPUs.

Increased capacitance results in fewer large voltage swings, which in turn increases the voltage available to the CPU and allows it to run at a higher frequency.

He said that what the team has seen on products is that this translates into a higher frequency at which the product can run.

Making the chip-making process more reliable, reducing costs, and improving performance are all important for a new manufacturing node. Sell said his team has made good progress thanks to Intel 4’s use of EUV lithography, an advanced process that uses extreme ultraviolet light to etch a chip design onto silicon.

Sell said that EUV has allowed Intel to simplify the process, cutting the number of steps required to etch designs onto wafers from five to one.

He said “everything now can be printed with a single layer to give you exactly the same structure.”

Sell told us that using EUV results in improved manufacturing yield, and that the number of wafers with defects will go down as new chips enter production.

Even though EUV is expensive, it will lower Intel’s chip-making costs for products using Intel 4. Sell says EUV reduces the number of steps and tools needed to make chips.

“A lot of the other tools that we have in our factory are not needed once you combine everything into a single step,” he said.

He thinks that this simplified process could allow Intel to increase its production capacity.

“You also get less demand in terms of space that you need,” Sell said, meaning Intel can either build fewer fabs or get more output from each one.

Intel is also taking a more modular approach to developing new nodes, a big change from the chipmaker’s previous approach, which resulted in major delays and gaffes with its 10nm and 7nm processes.

“Rather than having one giant step, we’re going into a much more modular approach, which means you have a few smaller steps and a few modules in the process,” he said. It is easier to get each module developed on time without first having everything else solved.

Page 12

In a rare admission, the tech giant has acknowledged that its new M2 chip can’t match the peak CPU performance of the latest from Intel.

Apple focused its high-level sales pitch for the M2 on claims that it is more power efficient than Intel’s latest laptop CPUs. At least for now, Intel has it beat when it comes to the performance of the CPU.

Johny Srouji, Apple’s senior vice president of hardware technologies, said during the presentation that the M2’s eight-core CPU will provide 87 percent of the peak performance of Intel’s 12-core Core i7-1260P while using just a quarter of the power.

A graph showing that Apple's M2 provides 87 percent of the peak performance of Intel's 12-core Core i7-1260P chip while using a quarter of the power.

The M2 is more power efficient than Intel’s CPUs, according to Apple.

In other words, at peak, Intel’s Core i7-1260P is about 15 percent faster than Apple’s M2, and that’s not even considering the two more powerful i7s in Intel’s P-series lineup.

The company also claimed that the M2’s CPU is 1.9x faster than Intel’s 10-core Core i7-1255U while using the same amount of power, though raw peak speed is not the race Apple is trying to run.

The original argument Apple made when the M1 was released in 2020 was that performance-per-watt was the most important metric.

“Unlike others in the industry, our approach is different. Power-efficient performance is a constant focus of ours: maximizing performance while minimizing power consumption,” Srouji said.

Performance-per-watt isn’t the only way Apple hopes the M2 will stand out when it lands in the MacBook Air and MacBook Pro.

The tech giant is making a bigger bet on the chip’s neural engine because it believes an increasing share of applications will rely on graphics and artificial intelligence, according to a veteran semiconductor analyst.

An image showing the specs of Apple's new M2 chip.

The M2 is still an impressive chip, especially its neural engine.

Apple’s decision to dedicate more transistors to the M2’s 10-core GPU and 16-core neural engine compared to the M1 reflects this. Those design decisions allowed the Mac maker to claim a 35 percent boost for the GPU and a 40 percent boost for the neural engine over the M1, while the M2’s CPU improved by only 18 percent, according to Apple.

It makes sense for Apple to put more weight on the GPU and neural engine, since those are the areas where faster silicon could make a bigger difference; applications that lean heavily on the CPU, like web browsers, have less need for faster chips.

Web browsers don't need a lot of performance, so the comparison on CPU performance is probably less relevant, the analyst told The Register. In the analyst's view, Apple wants to show it is competitive with Intel, and that it may be ahead of the game with neural processing and better graphics.

A graph from Apple showing that the M2's GPU is 2.3x faster than Intel's Core i7-1255U.

If you believe Apple's claims, the M2's integrated GPU is good.

While Apple didn't provide a competitive comparison for the M2's neural engine, it did claim that the 10-core GPU is 2.3 times faster than the integrated graphics within Intel's Core i7-1255U while using the same power, and that the M2's GPU can match the i7-1255U's peak graphics performance while using far less power. Note that the i7-1260P has faster built-in graphics than the i7-1255U, and Apple didn't compare the M2 against it.

By its own admission, then, Apple may not have the fastest CPU in the industry. But the trend in the compute world is that a faster central brain may matter less than dedicated accelerators for increasingly important workloads like graphics and artificial intelligence.

Page 13

After taking serious market share from Intel over the last few years, AMD has revealed larger ambitions in artificial intelligence, datacenters, and other areas with an expanded roadmap of CPUs, GPUs, and other kinds of chips for the near future.

A renewed focus on building better and faster chips for servers and other devices, as well as becoming a bigger player in artificial intelligence, were among the ambitions the company laid out at its Financial Analyst Day event on Thursday.

"These are the areas where we think we can win in terms of differentiating ourselves," CEO Lisa Su said in her opening remarks. "It's about leadership in technology. It's about expanding leadership in the datacenter. It's about expanding our software capability, and it's about bringing together a broader custom solutions effort, which we think is a growth area going forward."

At the event, the company revealed new plans for its server and client chips, as well as plans to introduce a new kind of hybrid chip for datacenters, with the aim of integrating artificial intelligence into its products in the future.

The company also announced plans for a unified software interface for programming artificial intelligence applications on different kinds of chips, which is similar to Intel’s OneAPI toolkits.

The chip designer wants to expand its custom chip business into new areas like hyperscale datacenters, automotive and 5G.

We already have a broad high-performance portfolio. The leading industry platform for chiplets is already in place. We're going to make it easier to add third-party intellectual property to that platform.

The upcoming chips will be based on the Zen 4 architecture and include the Ryzen 7000 desktop chips and the Genoa server chips.

The next-gen Zen 5 architecture will arrive in 2024 with integrated artificial intelligence and machine learning optimizations, along with enhanced performance and efficiency.

Zen 4 will be the first high-performance x86 architecture to use a 5-nanometer manufacturing process, and will deliver an 8 percent increase in instructions per clock (IPC) over Zen 3; Zen 3 provided a larger IPC jump over Zen 2.

The new architecture will bring a more than 15 percent boost in single-threaded performance and up to 125 percent more memory bandwidth per core, according to the company. Zen 4 chips will also come with AVX-512 and AI instruction extensions.

A Zen 4 desktop processor with 16 cores will provide more than a 25 percent boost in performance-per-watt and a greater than 35 percent boost in overall performance over Zen 3, according to an announcement by the company.
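AMD's two headline numbers together imply how much extra power that 16-core part would draw. Treating the "more than" figures as point values, a rough sanity check:

```python
# AMD's claims for a 16-core Zen 4 desktop part vs. Zen 3:
perf_gain = 1.35           # >35% more overall performance
perf_per_watt_gain = 1.25  # >25% better performance-per-watt

# Power is performance divided by performance-per-watt,
# so the implied change in power draw is the ratio of the two:
implied_power_ratio = perf_gain / perf_per_watt_gain
print(f"implied power draw: ~{implied_power_ratio:.2f}x Zen 3")  # ~1.08x
```

In other words, the two claims are mutually consistent with a chip that draws only slightly more power than its Zen 3 predecessor.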

An image showing AMD's roadmap for its Zen architecture through 2024.

The latest Zen architecture plan from the chipmaker.

An updated Zen roadmap shows there will be a version of Zen 4 that uses the company's vertical cache technology, with Zen 4 designs built on both 5-nanometer and 4-nanometer process nodes. That's in addition to the cloud-focused Zen 4c variant.

The situation will be the same for Zen 5, which will also get a Zen 5c variant for cloud-optimized chips.

The next generation of general-purpose server CPUs, Genoa, is on track to launch in the fourth quarter, according to the company. In the first half of 2023, the company will release its first lineup of cloud-optimized server CPUs, called Bergamo.

There are different versions of Epyc serving different product areas. The line was previously divided in two: general-purpose chips and processors for technical computing. With the upcoming Zen 4c-powered Bergamo parts, Epyc is expanding again, this time to cloud-friendly models.

With the general-purpose Genoa chips coming later this year, AMD is promising to deliver "leadership socket and per-core performance" with up to 96 Zen 4 cores as well as "leadership memory bandwidth and capacity" with up to 12 channels of DDR5 memory. The Genoa chip will provide more than 75 percent faster Java performance than the top third-gen Epyc chip, according to the chip designer.

Genoa chips will also gain memory expansion and new connectivity capabilities. We should expect improvements in confidential computing features, which include things like memory encryption, as well as CXL-related capabilities, according to the company.

The top Bergamo chip will provide double the cloud container density of the top third-gen Epyc chip, according to the company. The gain is driven by the fact that Bergamo chips will feature up to 128 Zen 4c cores and up to 256 threads while supporting 12 channels of memory and PCIe 5.

Bergamo is compatible with the Zen 4 instruction set and drops into Genoa's SP5 server platform, so applications won't require any code rewrites.

An image showing AMD's roadmap for Epyc server CPUs through 2024.

AMD's latest Epyc server roadmap.

Beyond the general-purpose Genoa chips and cloud-optimized Bergamo chips, the Zen 4 architecture will extend to two other sets of Epyc chips. Genoa-X, the successor to Milan-X, will target technical computing applications with up to 96 cores and a massive L3 cache.

Telecommunications and the intelligent edge form another new focus area for Epyc, with a chip that will come with up to 64 cores in a lower-cost platform.

AMD also teased a fifth generation of Epyc, which will carry the code name "Turin."

The Instinct MI300 chip, which will combine a Zen 4-based Epyc CPU with a GPU using AMD's fresh CDNA 3 architecture, is part of a larger plan to compete with Intel and Nvidia in the accelerator space.

AMD is billing the Instinct MI300 as the "world's first datacenter APU," or accelerated processing unit.

This means that within the next two years, all of the major chip makers will have hybrid CPU-GPU chips: Nvidia's Grace Hopper Superchip is due early next year, while Intel's Falcon Shores XPU is slated for 2024.

The Instinct MI300 is expected to deliver a greater than 8x boost in artificial intelligence training performance over the Instinct MI250X, which launched last fall as part of a set of datacenter GPUs that are more competitive against Nvidia's A100 than previous attempts. AMD is also promising "leadership memory bandwidth and application latency."

The Instinct MI300 will be powered by the CDNA 3 architecture, which will provide more than a 5x increase in performance-per-watt. This will be made possible by a 5nm process, 3D chiplet packaging, and a unified memory architecture that lets the CPU and GPU share memory.

The Instinct MI300 will use “groundbreaking” 3D packaging for high-bandwidth memory. The architecture’s design will allow the APU to use less power than other implementations.

After teasing some integration plans last month, the company is providing more details on how it plans to use Xilinx’s artificial intelligence and fabric technologies in multiple products.

The “adaptive architecture” building blocks will be known as the XDNA.

The AI engine's dataflow architecture makes it well-suited for signal processing applications that need a mix of high performance and energy efficiency.

The FPGA fabric, meanwhile, pairs adaptive interconnect with FPGA logic and local memory.

After teasing plans in May, AMD said it will use the artificial intelligence engine in future Ryzen processors, including two future generations of laptop CPUs coming over the next few years.

With the goal of giving developers a single interface to program across different kinds of chips, AMD promised that it will unify previously disparate software stacks for CPUs, GPUs and adaptive chips into one.

The first version of the Unified AI Stack will bring together AMD's ROCm GPU software, its CPU software, and the artificial intelligence software from Xilinx.

An image showing AMD's plan for a Unified Software Stack.

Software stacks are brought together into one.

Developers will be able to use popular artificial intelligence frameworks like PyTorch and TensorFlow across these chips.

"We're going to unify more of the things. We are going to have a lot of the same things in our library and in our graph compiler, and we're definitely going to roll out a lot more pre-optimized models for these targets," said Victor Peng, who is now head of the adaptive and embedded computing group.

Some details of new consumer-driven products coming out over the next few years were shared at the end of the event.

In the future, consumers can expect two different types of desktop chips, one of which will use the 3D vertical cache technology that was used in the Ryzen 7 5800X3D earlier this year.

An image showing AMD's roadmap for Ryzen CPUs through 2024.

The latest desktop plan from the company.

Desktop chips based on the Zen 5 architecture will carry the code name Granite Ridge.

The new generation of Ryzen chips, code-named Phoenix Point, will use Zen 4 and the company’s new RDNA 3 architecture for integrated graphics. The next generation, called Strix Point, will use Zen 5 and an improved version of RDNA 3. Both chips will be powered by the XDNA adaptive architecture portfolio.

An image showing AMD's roadmap for Ryzen laptop CPUs through 2024.

The latest laptop plan from the company.

The upcoming Navi 3 products will be based on the RDNA 3 architecture, which will be detailed later this year. The company said it will provide "industry-leading performance-per-watt" as well as "system-level efficiency" and "advanced multimedia capabilities."

The RDNA 3 architecture will combine a chiplet design, a next-generation Infinity Cache and a 5nm process, allowing it to provide a greater than 50 percent boost in performance-per-watt compared to RDNA 2. An RDNA 4 architecture will follow in 2024.

Page 14

With an expanded portfolio of chips that cover everything from the edge to the cloud, AMD hopes to become a big player in the artificial intelligence compute space.

It’s quite ambitious, given the dominance of Nvidia in the space, as well as the increasing competition from Intel and several other companies.

During the financial analyst day event last week, executives from the chip designer said that they believe they have the right technology to pursue the wider artificial intelligence space.

“Our vision here is to provide a broad technology roadmap across training and inference that touches cloud, edge and endpoint, and we can do that because we have exposure to all of those markets and all of those products,” Lisa Su said in her opening remarks.

She admitted that it will take a lot of work for the company to catch up in the space, but she said the market is the company’s single highest growth opportunity.

At last week’s event, executives from the company said that they have begun to see some traction in the market for artificial intelligence compute with the use of the company’s Epyc server chips for inference applications.

According to Dan McNamara, head of AMD's Epyc business, multiple cloud service providers are already using the ZenDNN library to get a "very nice performance boost" on recommendation engines.

ZenDNN, short for Zen Deep Neural Network, is supported by second- and third-generation Epyc chips and is integrated with popular frameworks such as PyTorch and TensorFlow.

“I think it’s important to say that a large percentage of inference is happening in the processor, and we expect that to continue going forward,” he said.

More artificial intelligence capabilities will be introduced at the hardware level in the near future, according to the company.

The AVX-512 VNNI instruction will be introduced to accelerate neural network processing in the next-generation Epyc chips, code-named Genoa.

Because the capability is implemented in Genoa's Zen 4 architecture, VNNI will also be present in the company's Ryzen 7000 desktop chips that are due by the end of the year.
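VNNI's job is to fuse the 8-bit multiply and 32-bit accumulate at the heart of quantized neural-network inference into a single instruction. A plain-Python sketch of what one 32-bit lane of a VPDPBUSD-style operation computes (an illustration of the math only, not the intrinsic itself):

```python
def vnni_dot_accumulate(acc, a_u8, b_i8):
    """Emulate one 32-bit lane of an AVX-512 VNNI dot product:
    acc += sum of four unsigned-8 x signed-8 products.
    In hardware this fused step replaces several separate
    multiply, widen and add instructions."""
    assert len(a_u8) == len(b_i8) == 4
    return acc + sum(a * b for a, b in zip(a_u8, b_i8))

# Quantized inference boils down to very many of these fused steps:
acc = vnni_dot_accumulate(0, [1, 2, 3, 4], [10, -1, 2, 0])
print(acc)  # 1*10 + 2*(-1) + 3*2 + 4*0 = 14
```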

AMD will also use the artificial intelligence engine technology gained in the Xilinx acquisition to expand the capabilities of its CPUs.

An image showing AMD's XDNA adaptive architecture IP: the AI engine and FPGA fabric.

These building blocks will be used in future chips.

The “adaptive architecture” building blocks that comprise the artificial intelligence engine will be incorporated into several new products in the future.

The artificial intelligence engine will be integrated in two future generations of Ryzen laptop chips. The first is code-named Phoenix Point, which will arrive in 2023, and the second is code-named Strix Point, which will arrive in 2024. The artificial intelligence engine will be used in a future generation of server chips, though it is not known when that will happen.

The first chips using the next-generation Zen 5 architecture are expected to debut in 2024.

The most recent generation of Instinct GPUs, the MI200 series, has made some headway in the field of artificial intelligence training, and AMD is hoping to make even more progress in the near future with new software improvements.

For instance, in the latest version of the ROCm software, there are improvements for training and inference workloads.

David Wang, the head of the graphics business at the company, said that the company has expanded ROCm support to its consumer-focused graphics cards.

He said, “We’re developing SDKs with pre-optimized models to ease the development and deployment of artificial intelligence applications.”

Microsoft and Facebook are two of the key leaders in the industry that have developed partnerships with the company.

He said that ROCm support for PyTorch has been improved to deliver "amazing, very, very competitive performance" on internal artificial intelligence and open-source benchmarks.

The Instinct MI300, the company's "world's first datacenter APU," uses the new CDNA 3 architecture, which AMD hopes will help it become even more competitive in the artificial intelligence space.

The Instinct MI300 is said to deliver an 8x boost in artificial intelligence training performance over the Instinct MI250X chip.

Forrest Norrod, head of the Datacenter Solutions Business Group at Advanced Micro Devices, said that the MI300 is a truly amazing part that points the way to the future of acceleration.

The chip designer made it clear that the acquisition will help it cover a wider range of opportunities in the artificial intelligence space and strengthen its software offerings. The latter is important if AMD wants to compete with its rivals.

Victor Peng, the former CEO of Xilinx, now heads the adaptive and embedded computing group at Advanced Micro Devices, where he leads development of all the FPGA-based products in the portfolio.

Prior to the acquisition of Xilinx, AMD's coverage of the artificial intelligence compute space was mostly in the cloud, at enterprises, and in homes with its chips.

With Xilinx's portfolio now under its banner, the chip designer has far more coverage of the artificial intelligence market. Health care and life sciences, transportation, smart retail, smart cities, and intelligent factories are just a few of the industries where Xilinx's adaptive chips are used. Telecommunications providers use Xilinx's Versal adaptive chips, while its Kintex FPGAs and Alveo accelerators are used in cloud datacenters.

An image showing AMD's industry coverage with its CPUs, GPUs and adaptive chips.

Several industries in the artificial intelligence compute space are covered by the company's products.

The heavy-duty training is happening in the cloud, but the company says it is present in a lot of the areas doing artificial intelligence work.

Xilinx's products are highly complementary to AMD's existing portfolio, and the company is targeting its combined offerings at a wide variety of needs:

  • Ryzen and Epyc CPUs, including future Ryzen CPUs with the AI engine, will cover small to medium models for training and inference
  • Epyc CPUs with the AI engine, Radeon GPUs and Versal chips will cover medium to large models for training and inference
  • Instinct GPUs and Xilinx’s adaptive chips will cover very large models for training and inference

When we start incorporating artificial intelligence into more of our products and go to the next generation, we will cover a lot of space across the models.

An image showing AMD's AI application coverage with its CPUs, GPUs and adaptive chips.

Different parts of the artificial intelligence spectrum are covered by different parts of the company's portfolio.

If the company wants broader industry adoption of its chips, it will need to make it easy for developers to program them.

That's why the chip designer plans to unify its various software stacks into one interface, which it's calling the AMD Unified AI Stack. The first version will bring together software from the two companies to provide a unified development and deployment tool for inference workloads.

The company plans to consolidate even more software components in the future, so that, for instance, developers only have to use one machine learning graph compiler for any chip type.
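The idea of one graph compiler fronting several device backends can be sketched abstractly. The class and backend names below are hypothetical, chosen purely to illustrate the dispatch pattern; they are not AMD's actual API:

```python
# Hypothetical illustration of a unified-stack dispatch layer:
# one compile entry point, multiple device backends behind it.
class Backend:
    name = "generic"

    def compile(self, graph):
        # A real compiler would lower the graph to device code;
        # here we just tag it with the backend's name.
        return f"{self.name}-binary({graph})"

class GPUBackend(Backend):
    name = "rocm"      # stand-in for an AMD GPU target

class AdaptiveBackend(Backend):
    name = "vitis-ai"  # stand-in for a Xilinx adaptive-chip target

def compile_model(graph, device):
    """Single front end: the caller never touches a device-specific API."""
    backends = {"gpu": GPUBackend(), "adaptive": AdaptiveBackend()}
    return backends[device].compile(graph)

print(compile_model("resnet50", "gpu"))       # rocm-binary(resnet50)
print(compile_model("resnet50", "adaptive"))  # vitis-ai-binary(resnet50)
```

The design point is that developers write against `compile_model` once, while backends can be added or swapped underneath without changing application code.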

"In the same development environment, people can hit any one of these target architectures," he said, adding that the next generation will unify more of the middleware.

Making a strategy like that work will require a lot of heavy lifting, and doing right by developers.

Page 15

Interview 2023 is shaping up to be a big year for Arm-based server chips, and a significant part of this drive will come from Nvidia, which appears steadfast in its belief in the future of Arm, even if it can’t own the company.

The new Arm-based chips are expected to appear in systems from several vendors next year. Among them is the Grace Hopper Superchip, which pairs one Grace CPU with one Hopper GPU.

American companies like Dell Technologies, HPE and Supermicro, as well as China's Inspur and Taiwan's ASUS, are among the vendors lined up for the server business. Artificial intelligence training and inference, high-performance computing, digital twins, and cloud gaming and graphics are some of the areas where the servers will focus.

The chip designer is hoping to lure operators and developers to the Arm side with the promise of some major improvements over x86 chips currently in the market.

Up to 1TB of error-correcting LPDDR5x memory and as much as 1TB/s of memory bandwidth are included in the Grace Superchip. The Grace Superchip's two CPUs communicate with each other via Nvidia's 900GB/s NVLink-C2C interconnect.

“What Grace allows us is to push the boundaries of innovations and address the gaps that are there in the market,” said Paresh Kharya, the director of datacenter computing at Nvidia.

He claimed that the 900GB/s link is seven times faster than the PCIe Gen 5 connectivity that will be used by Intel's upcoming Sapphire Rapids server chips and AMD's Genoa, saying nothing else on the market matches that speed.
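The 7x figure roughly checks out against the commonly cited aggregate bandwidth of a PCIe Gen 5 x16 link, about 128GB/s across both directions (the exact number depends on encoding overhead):

```python
nvlink_c2c_gbps = 900   # Nvidia's quoted NVLink-C2C bandwidth, GB/s
pcie5_x16_gbps = 128    # ~PCIe Gen 5 x16, both directions combined

ratio = nvlink_c2c_gbps / pcie5_x16_gbps
print(f"NVLink-C2C is ~{ratio:.1f}x a PCIe Gen 5 x16 link")  # ~7.0x
```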

Kharya also claimed 2x higher energy efficiency for the memory subsystem, thanks to the use of LPDDR5x, and 2x faster memory bandwidth compared to systems currently available in the market.

Nvidia estimates that a system with the Grace Superchip will score 740 on the SPECrate 2017_int_base benchmark for CPU-bound tasks. If its numbers hold up, such a system would be 50 percent faster than the DGX A100, which uses two 64-core AMD Epyc 7742 processors.
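Working backward from those two numbers gives the implied baseline score for the dual-Epyc DGX A100:

```python
grace_specrate_est = 740   # Nvidia's estimated SPECrate 2017_int_base
claimed_speedup = 1.50     # "50 percent faster" than the DGX A100

# Implied score for the dual-Epyc-7742 DGX A100 baseline:
implied_dgx_a100 = grace_specrate_est / claimed_speedup
print(f"implied DGX A100 score: ~{implied_dgx_a100:.0f}")  # ~493
```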

That said, the comparison pits the Grace Superchip against an x86 processor that launched three years ago, though the DGX A100 remains the "top of the line server" for artificial intelligence applications today.

"We love all the innovation that comes to the market from x86 CPUs, and we and our customers are able to take advantage of it, but at the same time we are able to push the boundaries of innovation and fill in the gaps," Kharya said.

To take advantage of these capabilities, operators and developers will need to make the leap from the comfortable world of x86 systems to the less familiar world of Arm servers.

It may seem like a big leap, but Nvidia's partnership with Arm has helped prepare the server software ecosystem. The company announced the expansion of CUDA support to Arm, along with its full stack of artificial intelligence and high-performance computing software, back in 2019, and more of Nvidia's software has become Arm-compatible since.

"We've been on a constant journey towards that since we announced our CUDA on Arm project a while ago. All of our key stacks are supported on Arm, including our artificial intelligence platform, the Omniverse platform for digital twins, and the Nvidia HPC platform. We're working with the entire ecosystem to ensure readiness," Kharya said.

Arm-based systems using Ampere Computing's Altra chips are already on the market, and Nvidia says it is making sure its software provides the best possible performance on them.

The US Department of Energy's Los Alamos National Laboratory plans to use both Grace and Grace Hopper Superchips in its next-generation Venado supercomputer, billed as the first system of its kind.

As organizations start putting the company’s server designs through their paces, the true test will play out, as Nvidia tries to convince the datacenter world of Arm’s differentiation.

Page 16

Nvidia's upcoming DGX H100 artificial intelligence system will pair its flagship H100 GPUs with Intel's next-generation Sapphire Rapids Xeon processors.

During a discussion at the BofA Securities Global Technology Conference on Tuesday, Jensen Huang, co-founder and CEO of Nvidia, confirmed the CPU choice. The DGX family is the premier vehicle for Nvidia's GPUs: the machines come pre-loaded with its software and tuned to provide the fastest artificial intelligence performance as individual systems or in large clusters.

Which next-generation x86 server processor the system would use has been an open question for us and other observers since the DGX H100 was announced in March.

Nvidia has previously promised that the DGX H100 will arrive by the end of the year, packing eight H100 GPUs based on the Hopper architecture and linked by the fourth-generation NVLink interconnect. The chip designer has claimed a single system will be capable of delivering 32 petaflops of artificial intelligence performance using its FP8 format.
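Dividing the system-level claim across its GPUs gives the implied per-accelerator figure:

```python
system_fp8_petaflops = 32  # Nvidia's claim for one DGX H100
gpus_per_system = 8        # eight H100 GPUs per DGX H100

per_gpu = system_fp8_petaflops / gpus_per_system
print(f"~{per_gpu:.0f} FP8 petaflops per H100")  # ~4
```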

Huang confirmed the selection of Sapphire Rapids for the DGX H100 and voiced his continued support for x86 CPUs, even as the company plans to introduce its first Arm-based server CPU, Grace, next year.

"We buy a lot of x86s. We have great partnerships," he said at Tuesday's event. "For the Hopper generation, I've selected Sapphire Rapids to be the processor," he added, praising its excellent single-threaded performance and noting that it is being qualified for hyperscalers around the world as well as for Nvidia's own servers, its own DGX machines.

The selection of Intel's upcoming Sapphire Rapids chip, which has already started shipping to some customers, marks a reversal of sorts for Nvidia, which chose AMD's second-generation Epyc server CPU, code-named Rome, for the DGX A100 system it introduced in 2020.

This comes after industry publication ServeTheHome reported in mid-April that Nvidia had designs for both Sapphire Rapids and Genoa versions of the DGX H100, though it was not yet known which x86 chip the company would use.

While Intel will consider this a victory as it works to regain technology leadership after years of missteps, it's a relatively small win next to the bigger battle over GPUs and other accelerators playing out between the companies. It's why, for instance, Intel is making a big bet on its upcoming Ponte Vecchio GPUs, and why AMD has pushed to become more competitive against Nvidia with its latest Instinct GPUs.

Nvidia, meanwhile, has decided to build its own Arm-compatible processor in order to speed up the flow of data between CPU and GPU.

The first iteration of this design, the Grace Hopper Superchip, will be introduced next year, along with, we think, a new kind of DGX system that uses Grace. Intel has a hybrid design of its own coming: the Falcon Shores XPU, its first chip to combine CPU and GPU, is due in 2024.

During Tuesday’s talk, Huang promised that Grace will allow the company to fine tune everything from the components to the systems to the software. While the Arm-compatible chip is designed to benefit recommender systems and large language models used by hyperscale companies, it will be used for other applications too.

"Grace has the advantage in every single application domain that we go into, whether it's machine learning, cloud gaming, or digital twin simulations, because we have the full stack lined up," he said. "In all of the spaces that we're going to take Grace into, we own the whole stack, so we have an opportunity to create the market for it."

Page 17

The x86 giant admitted that a broader release of the server chip has been delayed.

In a Tuesday panel discussion at the BofA Securities Global Technology Conference, Intel's datacenter boss confirmed the delay of the next-generation Xeon processor, code-named "Sapphire Rapids." At the same event, the CEO of Nvidia said that the company's flagship DGX H100 system would not use the Genoa chip from Advanced Micro Devices.

After falling behind in technology over the past few years, Intel is trying to get back in the game with the introduction of the next generation of Xeon. With industry-first support for new technologies such as PCIe Gen 5 and Compute Express Link, Intel hopes it will beat the next-gen Epyc chip from AMD.

There have now been multiple delays. In June of last year, Intel said it was postponing production of the chip from the fourth quarter of 2021 to the first quarter of 2022, with plans to ramp up shipments in the second quarter. The chip is built on the 10-nanometer-class process that Intel only made viable for mass production after several years of delays.


Rivera said at Tuesday's event that Intel will start ramping up production of Sapphire Rapids later in the year than originally anticipated. She said the delay is needed to allow more time for platform and product validation.

She pointed to Intel's new chips for PCs and laptops as proof that the Intel 7 process is doing well.

Rivera said Sapphire Rapids will be a "leadership" product when it's made available, but that the leadership window will be shorter than expected: the delay means AMD's Genoa chip, slated to launch later this year, will now arrive soon after Sapphire Rapids begins to ramp.

"We would have liked more of that gap, more of that leadership window for our customers in terms of when we originally forecasted the product to be out and ramping in high volume, but because of additional platform validation that we're doing, that window is a bit shorter," she said, adding that it depends on where the competition is.


Rivera said demand for the chip is still very high despite the delays. She admitted that not all customers will move in one step.

Compute-heavy customers like Nvidia will take advantage of the chip's support for new technologies like DDR5 as well as its improvements in performance and total cost of ownership.

"Ice Lake is continuing to grow. We had record revenue and volume in the first quarter of the year," she said, adding that Ice Lake will remain the highest-volume product as Sapphire Rapids ramps later in the year and throughout 2023.

According to Rivera, the follow-up chip, Emerald Rapids, will provide a "nice performance boost in terms of the memory, networking and overall performance" while fitting into the same sockets.

She said that will make upgrading to Emerald Rapids easier for customers, and give them a bigger return on their investment.

An update on Genoa and future generations of Epyc chips is expected at AMD's financial analyst event, which will be streamed on Thursday. In a May update, the chip designer said it was on track to launch Genoa in the second half of the year.

It's important to remember that Intel faces other threats in the datacenter on the CPU side: companies like Amazon Web Services and Ampere Computing are claiming advantages over Intel's chips with new processors based on Arm's architecture.

Rivera said that Arm's share of the server market is still small, but she acknowledged that cloud service providers are interested in chip architectures that offer an alternative to the way Intel's Xeon chips have traditionally been designed. That is why the semiconductor giant plans to introduce the Sierra Forest chip, which uses its efficiency-core design, in 2024.

A lot of the cloud customers looking at efficiency-core types of workloads don't want all of the extras, she said; they just want high-density throughput and lots and lots of cores for some of the work.

Page 18

According to a recently published study, AMD has come out on top in cloud compute performance.

In performance tests across the three most popular cloud providers, instances using AMD's multi-core x86-64 Milan and Rome processors beat Intel's Cascade Lake and Ice Lake instances, according to research from database company CockroachDB.

The researchers used version 1.0 of the CoreMark benchmark, which can be limited to run on a single vCPU or execute its workload across multiple vCPUs, and found that the Milan processors outperformed Ice Lake.
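CoreMark's two modes, the same workload confined to a single vCPU versus fanned out across several, can be mimicked with any CPU-bound kernel. A minimal standard-library sketch of that methodology (the busy loop below is a stand-in, not CoreMark itself):

```python
import concurrent.futures
import time

def kernel(iterations=200_000):
    """A stand-in CPU-bound workload. CoreMark uses list, matrix
    and CRC kernels; any deterministic busy loop shows the pattern."""
    acc = 0
    for i in range(iterations):
        acc = (acc + i * i) % 65521
    return acc

def run(workers):
    """Run one copy of the kernel per worker process and time it."""
    start = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(kernel, [200_000] * workers))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    _, t1 = run(1)  # "single vCPU" mode: one copy of the workload
    _, tn = run(4)  # "multi vCPU" mode: four copies in parallel
    print(f"1 worker: {t1:.2f}s, 4 workers: {tn:.2f}s")
```

On an instance with at least four vCPUs, the four-worker run should take roughly the same wall-clock time as the single-worker run despite doing four times the work; that scaling is what the multi-vCPU numbers in such reports capture.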

In the past, we saw Intel lead the pack in overall performance, with competitors competing on price-for-performance metrics. This year, both the overall performance leader and the price-for-performance leader were based on AMD silicon.

The t2d instance was followed by the n2 standard instance running Intel’s Ice Lake processors. The large M6i instance, which also uses Ice Lake, finished third, and other instances rounded out the top ten. Two of the Azure instance types had individual runs that could have broken into the top ten, but on their median runs they were less performant than the leaders.
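The median-versus-best distinction matters to these rankings. A minimal sketch of the effect, using invented CoreMark scores rather than figures from the report:

```python
import statistics

# Hypothetical CoreMark scores (iterations/sec) over five repeated runs.
# instance_b has one standout run, but its typical run is slower.
instance_a = [152_000, 149_500, 151_200, 150_800, 150_100]
instance_b = [158_000, 131_000, 129_500, 130_200, 128_900]

# Ranking on the single best run would flatter instance_b...
print(max(instance_b) > max(instance_a))  # True

# ...but ranking on the median run, as the study does, does not.
print(statistics.median(instance_b) < statistics.median(instance_a))  # True
```

Using the median run rather than the best run keeps a single lucky (or noisy) benchmark pass from vaulting an instance type into the top ten.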

All three major cloud providers have similarly price-competitive offerings, according to the study.

All three clouds were in a statistical dead heat in terms of price and performance. Depending on the requirements of a specific workload, even instance and storage combinations that are a bit more expensive are potentially very competitive.

Storage and data-transfer costs can matter more to the total cost of operating on a given cloud provider than instance prices, the database company says.

Storage and data transfer can become hidden costs, having a larger impact on total cost than the price of the instances themselves, especially when building a highly resilient, stateful application, it warns.
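To make the warning concrete, here is a toy monthly cost model for a stateful, cross-region deployment. Every price and quantity below is an illustrative assumption, not a real cloud list price:

```python
# Illustrative monthly costs for a replicated, cross-region database.
# All rates and quantities are assumptions made up for this sketch.
instance_per_hour = 0.70      # one 16-vCPU VM (assumed rate)
storage_per_gb_month = 0.10   # block storage (assumed rate)
egress_per_gb = 0.09          # cross-region/internet transfer (assumed rate)

hours_per_month = 730
storage_gb = 2_000            # replicated database volumes
egress_gb = 20_000            # replication and client traffic

instance_cost = instance_per_hour * hours_per_month   # about $511
storage_cost = storage_per_gb_month * storage_gb      # about $200
transfer_cost = egress_per_gb * egress_gb             # about $1,800

# The "hidden" line item dominates the bill in this scenario.
print(transfer_cost > instance_cost + storage_cost)   # True
```

With chatty cross-region replication, the transfer line alone exceeds compute and storage combined, which is exactly the trap the report describes.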

“If there is one point to take away from this year’s report, especially if I was a CIO or CTO building a globally distributed application concerned about cost when picking a cloud provider, the network transfer cost is where I would focus,” said McClellan, director of partner solutions engineering at Cockroach Labs. “Our findings shine a light on the total cost to operate.”

Page 19

The latest list of the world’s 500 fastest publicly known supercomputers shows AMD has become a darling among organizations running x86-based clusters.

The most recent update of the list was published on Monday.

Frontier is the world’s first publicly benchmarked exascale supercomputer and it achieved a peak performance of 1.1 exaflops, based on the standard Linpack benchmark used to measure the world’s top systems.

It was only a few years ago that Intel and the DOE said that the Intel-powered Aurora would be the first exascale system in the US, but delays have pushed the date back to sometime later this year.

As a fun side note, The Register noticed Intel edited its 2019 press release about Aurora to remove the mention of it being “the first exascale supercomputer” and to change the delivery date from 2021 to 2022. You don’t often see companies editing old press releases like this, as chip reporter Dylan Martin (@DylanOnChips) observed on May 31, 2022.

When considering systems that don’t have publicly submitted benchmark results, Frontier may not be the world’s fastest supercomputer. There are two systems in China that have reached a peak performance of 1.3 exaflops, but the systems’ operators have yet to submit their results to Top500.

Not long ago, AMD’s CPUs accounted for only six of the world’s fastest 500 supercomputers.

The just-released update showed that 93 of the top 500 are powered by AMD processors, almost double the company’s share of the list in the spring of last year.

The chip designer’s CPUs are present in five of the top 10, 10 of the top 20, 26 of the top 50, and 41 of the top 100.

Intel’s share of the Top500 has fallen to 388 systems from 464 five years ago, with the list’s spring 2022 update bringing the x86 giant below four-fifths of total systems for the first time in ten years.
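The four-fifths figure is easy to verify from the system counts cited:

```python
# System counts from the spring 2022 Top500 list as cited above.
intel_systems, amd_systems, total_systems = 388, 93, 500

intel_share = intel_systems / total_systems
print(f"{intel_share:.1%}")                   # 77.6%, below four-fifths (80%)
print(f"{amd_systems / total_systems:.1%}")   # 18.6%
```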

Intel’s CPUs are still present in one of the top 10, five of the top 20, 15 of the top 50, and 46 of the top 100.

One of the things that has helped AMD gain traction over the last few years is that its Epyc server CPUs have had higher core counts than Intel’s Xeon CPUs, making them well suited for applications that scale well with cores.

This is reflected in the latest Top500 list, with 27 percent of the total cores across the list being AMD Epyc cores. Intel’s CPUs are still present in most systems, and its cores represent 45 percent of the total.

It’s important to remember that the world of high performance computing isn’t all about x86 chips. There are 19 supercomputers with chips that weren’t designed by Intel or AMD.

IBM’s Power chips are used by seven of them, including the systems in the No. 4 and No. 5 spots. Fujitsu’s Arm-based A64FX chips, which power Japan’s Fugaku system in the No. 2 spot, account for nine percent of all cores.

The NEC Vector Engine chips represent a very small percentage of cores. The Sunway TaihuLight system is powered by China’s own ShenWei chip, which makes up more than 10 percent of all cores. Another system uses the Hygon Dhyana chip, made by a joint venture in China using AMD’s first-generation Zen architecture.

Of the systems in the Top500, 168 are using GPU accelerators. The fact that most of the world’s fastest systems don’t use such components suggests many HPC applications have yet to take advantage of them. Nvidia supplies the GPUs in 157 of those systems, a share no other vendor comes close to.

AMD’s share of accelerators in the Top500 got a slight boost thanks to seven new systems that combine the new Instinct MI250X GPU with third-generation Epyc chips.

Those systems include Frontier and two others in the top 10 of the list. The Epyc chips used in these systems, code-named Trento, support cache coherency, which allows the two processor types to share memory more easily.

Still, AMD has a way to go before it can take any meaningful GPU share from Nvidia, which has an opportunity to defend its footprint as the number of systems with accelerators continues to increase.

There are a few curiosities in the Top500. Two systems still use Intel’s discontinued Xeon Phi accelerators. Another uses the Matrix-2000 accelerator developed by the National University of Defense Technology in China, and one more uses a Chinese accelerator called the “Deep Computing Processor.”

Two systems in Japan use homegrown accelerators: the PEZY-SC3, developed by the country’s PEZY Computing, and the MN-Core, created by Japan’s Preferred Networks.

Whatever the latest Top500 update shows, we should remember that Intel is hungry to make up for the mistakes it has made in the past several years and create more competitive chips again.

More systems using chips based on Arm and other alternative architectures are also a possibility in the future, given that Europe and China are increasingly looking at designing their own. AMD shouldn’t get lulled into a false sense of security.

Page 20

PC makers that faced a shortage of Threadripper chips earlier this year will finally be able to sell Threadripper Pro 5000 workstations later this month.

AMD said the Ryzen Threadripper Pro 5000 will be available to leading system integrators in July, and to do-it-yourself builders through retailers later in the year. Dell said it would release Threadripper Pro 5000 workstations in the summer.

The coming wave of Threadripper Pro 5000 workstations will end the exclusive window that Lenovo has had with the high-performance chips since they launched in April.

Smaller companies, known as system integrators, were experiencing a severe shortage of last-generation Threadripper 3000 CPUs in the first half of 2022.

Lenovo’s exclusivity left buyers with fewer options from other companies. This was a big deal in the workstation world because AMD has been seen as the go-to choice for high-end desktops, thanks to chips that are faster and more capable than Intel’s.

Maingear, Puget Systems, and Velocity Micro told us a few months ago that the Threadripper shortage was slowing their business and forcing them to recommend Intel-based systems in multiple cases.

The good news is that you’ll be able to use a Threadripper Pro 5000 chip on a WRX80-based board.

While the expansion of Threadripper Pro 5000 availability is a positive development for workstation vendors and buyers, it also confirmed what some industry players suspected: the end of the non-Pro Threadripper CPU.

We shouldn’t expect to see a Threadripper 5000 lineup like we did with the 3000 and previous generations, because AMD is simplifying the platform. The chip designer said the Pro line serves what the most demanding enthusiasts and content creators value most in the platform.

In painting the news as a positive development, the company said Threadripper Pro 5000 will give users 128 lanes of PCIe Gen 4 connectivity, eight channels of DDR4 memory, and a massive L3 cache, plus the management and security features that come with Ryzen Pro chips.

The Threadripper Pro parts are more expensive than the non-Pro parts favored by the consumer set.

The Threadripper Pro brand was introduced by AMD in 2020.

A branch off the regular Threadripper processors, these chips were made with professionals in mind, offering capabilities from higher-capacity, error-correcting memory to more than double the PCIe lanes.

That professional positioning carries a hefty premium. Tom’s Hardware noted last year that the Threadripper Pro 3995WX had a recommended price of $5,489, $2,099 higher than the consumer-friendly Threadripper 3990X, while for other models in the lineup the gap between Pro and non-Pro versions was $750.

It’s not just the cost of the processor that is higher. Puget Systems said the motherboards can also pump up the price of an overall system compared to non-Pro Threadripper builds.

A larger tower chassis is required to accommodate the high number of PCI Express lanes and memory channels these chips offer. The company tried to explain what was happening with Threadripper in a May post, saying that what used to fit in a mid-tower for a reasonable price now requires a full tower case and costs thousands of dollars more.

We will grant that Threadripper Pro systems are more affordable than workstations using AMD’s server-grade Epyc chips, but those hoping to build a workstation-ish system on a budget may want to check out the latest high-end consumer CPUs from both Intel and AMD.

Page 21

Lenovo’s new small desktop workstation is smaller than previous designs, but still has the type of performance professional users require.

The ThinkStation P360 Ultra will be available at the end of this month, but it won’t have the Xeon chips that we’re used to.

Many professional users will be pleased by the support for up to eight displays, as well as the ability to use plug-in M.2 cards to store up to 8 Terabytes of data. In the US, pricing is expected to start at $1,299.

Exploded view of the Lenovo ThinkStation P360 Ultra (pic: Lenovo)

The new system is not the smallest one made by Lenovo; that honor goes to the one-liter ThinkStation P360 Tiny, an updated version of the P350 Tiny. The new form factor, however, packs the kind of high-end professional components you would expect from a tower format.

According to the company’s own tests, the new system performs more than 50 percent better than previous-generation small form factor desktop workstations.

Lenovo developed a compact form factor that can fit both the RTX A5000 graphics card and the cooling needed to support it. The system has an unusual layout, with a dual-sided board positioned in the middle of the case, which offers increased airflow to processors that run at up to 125W.

Rob Herman claimed that the desktop workstation was built to deliver impressive performance in a space-saving form factor.

The ThinkStation P360 Ultra is tested to pass demanding reliability standards, though not the most demanding standards for ruggedized hardware.

Page 22

To cater to customers in Europe, the Middle East, and Africa, Lenovo has opened its first manufacturing facility in Europe.

Lenovo’s new manufacturing facility in Pest county, Hungary (pic: Lenovo)

A central location within Europe and strong infrastructure were among the reasons the factory was sited in Üllő, in the charmingly named Pest county.

Some of the investment is backed by local government incentives. The lower wage structure in the country played a role in the selection process.

The site employs over 1,000 full-time staff in a range of engineering, management and operational roles, and the numbers are continuing to increase as the facility moves towards full capacity.

The initial plan was for the new factory to open in the spring of 2021.

Lenovo EMEA president François Bornibus said the company had reached a “milestone” in the evolution of its global manufacturing network, which includes a mix of both in-house and contract manufacturing.

“Hungary’s well connected location puts us closer to our European customers so that we can fulfill and sustain their needs while remaining at the forefront of innovation,” he stated. “As our business continues to grow around the world, this incredible new facility will play a key role in our plans to ensure future success and bring smarter technology for all to Europe more sustainably, quickly and efficiently.”

The new site is one of the largest for the company. The production lines are said to be capable of making more than 1,000 servers and 4,000 workstations per day.

A building management system was built into the factory to monitor temperature, humidity, and asset conditions.

The new factory was fitted with solar panels that can provide half a megawatt of power, enough for the equivalent of a small village. Combined with new manufacturing processes, such as a patented low-temperature solder process, this will contribute toward achieving the company’s climate goals.

Lenovo recently reported the first annual profit in its infrastructure group since buying IBM’s x86 server business.

Quarterly revenue for Q4 was $16.7 billion, a 7 percent increase over the same quarter in the previous year, while annual revenue was $71.6 billion, an 18 percent increase and annual net income was $2 billion.


Page 23

Lenovo has slashed its range of portable workstations.

The Chinese PC maker has introduced the ThinkPad P16, and The Register has confirmed that the ThinkPad P15 and P17 will be retired.

The P16 is powered by Intel’s 12th-gen HX silicon, all the way up to 16-core Core i9 models. A discrete graphics card is an option.

Storage can reach 8 terabytes and memory can reach 128 gigabytes. The machine appears to have a single USB-A, USB-C, and HDMI port.

Lenovo says the machine combines the best features of the P15 and P17 into an all-new, compact, and improved form factor, as the graphic below shows.

Lenovo’s portable workstation roadmap

The Register inquired about the meaning of that graphic, and was told that it means there is now only one portable workstation in the range.

According to the graphic, the P17 offered an Intel Xeon processor and a 17-inch screen, neither of which is available on the P16. Intel’s new HX silicon replaces the Xeon, though the Core i9 HX may not suit every mobile workstation user. P17 owners will be robbed of a little screen real estate if they move to a 16-inch screen, while P15 users will need to carry a slightly larger machine.

Linux wasn’t offered as a pre-installed option on the P15, and it’s not clear whether the P16 will do better on that front.

The P16 looks like a combo of the two previous offerings.

We don’t know why Lenovo has decided to reduce its range, but a likely reason is that mobile workstations are not a high-volume product, so the company cannot sustain two models. If we get substantive information from Lenovo, we will update the story.

Updated at 20:45 UTC, May 20th: Errors in the laptop’s specifications have been corrected in the article.

Page 24

Intel wants €593m in interest after successfully appealing Europe’s €1.06 billion antitrust fine.

After years of fighting the fine, the x86 chip giant was told it didn’t have to pay up after all. The US tech titan says it is trying to get damages for being screwed around.

According to official documents published on Monday, Intel has gone to the EU General Court seeking payment of compensation and consequential interest for the damage sustained because of the European Commission’s refusal to pay it default interest.

Intel’s sums are based on the European Central Bank’s refinancing rate, which was 1.25 percent when the penalty was approved in 2009, increased by 3.5 percentage points; on that basis, the chipmaker reckons it is owed more than half the value of the fine in interest.
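As a rough sanity check, simple interest at that rate on the full fine does land in the region of €593m; the accrual period below is our assumption, not a figure from Intel’s filing:

```python
# Back-of-the-envelope reconstruction of the interest claim. The accrual
# window is an assumption; Intel's actual calculation may differ in
# method and dates.
fine_eur = 1.06e9                  # the 2009 antitrust penalty
rate = (1.25 + 3.5) / 100          # ECB refi rate in 2009 plus 3.5 points
years = 11.8                       # assumed accrual period, 2009 to ~2021

interest_eur = fine_eur * rate * years
print(f"EUR {interest_eur / 1e6:.0f}m")   # roughly EUR 594m, near the sum sought
```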

Intel has also asked the court to impose additional interest in the event of late payment of those sums.

This is the same Intel that wanted to build a factory in Germany.

The European Commission and the chip goliath are in a fight over alleged anticompetitive conduct.

Intel gave its hardware partners incentives to use its x86 processors, putting competitors at a disadvantage. Major computer makers, including Dell, HP, and Lenovo, were reportedly given incentives by Intel to use its chips over those of rival Advanced Micro Devices. Intel was also accused of paying a German electronics retailer not to sell computers with competitors’ components.

A €1.06 billion penalty was leveled against Intel after a five-year investigation concluded in 2009.

The result of Intel’s anticompetitive conduct, the commission said, was a reduction in consumer choice and lower incentives to innovate.

Intel unsuccessfully appealed the fine in 2012. After the chipmaker escalated the case, the European Court of Justice sent it back to the General Court.

After more than a decade of debate, the court sided with Intel, calling the commission’s analysis incomplete and saying it had failed to establish a legal standard that the “rebates at issue were capable of having, or were likely to have, anticompetitive effects”.

The saga isn’t over. The European Commission said in April it would appeal the court decision. The appeal is still ongoing.

Page 25

According to a report, Intel is set to receive $7.3 billion in subsidies for a massive chip manufacturing campus it’s planning in Germany, and the x86 giant won’t have to worry about TSMC setting up shop somewhere nearby for the time being.

Local media reported last week that Martin Kröber, the city’s representative in the Bundestag, disclosed the German subsidies for Intel’s planned fab site in Magdeburg. According to Kröber, the federal government has allocated over two billion euros for the project in the upcoming budget.

According to Germany’s Deutsche Presse-Agentur, the government is considering subsidies for other microelectronics projects.

The news is likely to be of some relief to Intel CEO Pat Gelsinger, who has been begging the US Congress to pass chip subsidies in America for the company’s planned fabs in Ohio and Arizona. The House of Representatives and the Senate have been working on the CHIPS for America Act.

Germany’s subsidies will cover a chunk of the initial €17 billion Intel plans to spend on the mega-site. The project is part of a larger investment in Europe planned by the American chipmaker, which will include an R&D and design hub in France as well as manufacturing, foundry, and chip packaging operations in Italy, Poland, and Spain.

In the first phase of the project, Intel’s massive site in Magdeburg will consist of two neighboring Fabs that will occupy the space of two football fields. Tens of thousands of additional jobs at suppliers and partners are projected to be created by the chipmaker’s campus, as well as 3,000 permanent high-tech jobs for the company.

The plants are expected to begin manufacturing chips using Intel’s most advanced technology in 2027.

The same can’t be said for Taiwan’s TSMC, the world’s largest contract chip manufacturer, which makes chips for companies including Apple and Intel.

The chairman of TSMC said on Wednesday that the company has “relatively fewer customers” in Europe and that it has no concrete plans to open a factory there.

A year ago, TSMC said it was in the early stages of considering an expansion into Germany, but the chipmaker apparently hasn’t made much progress with the plans.

EU officials have been working with Taiwan’s government in a bid to lure the island nation’s chipmakers to set up shop in Europe.

The EU’s proposed European Chips Act was revealed in February to boost the bloc’s competitiveness and resilience in semiconductors while also supporting digital transformation and environmental goals.

Last week, Taiwan’s Ministry of Economic Affairs announced a “major breakthrough” in talks with the EU about cooperation in the semiconductor industry, which could pave the way for Taiwanese chipmakers to build new facilities in Europe.

It seems the EU shouldn’t count on TSMC, and should instead look to Taiwan’s other foundries, which have less advanced manufacturing technology.

Page 26

China should seize Taiwan to gain control of TSMC if the United States and its allies impose sanctions against the Middle Kingdom like those leveled against Russia, a prominent Chinese economist has argued.

A US Army War College paper suggested last year that Taiwan should destroy its chip factories if China invaded.

Chen Wenling, chief economist at the China Center for International Economic Exchanges, delivered the speech at the China-US Forum at the end of May. The text was later posted on an online news site.

Chen said that a confrontation between the two powers would be a disaster for mankind and that China and the US needed to ease their hostile relations.

She claimed that the US was attempting to create two large “anti-China” trade bodies, despite the US pulling out of the Trans-Pacific Partnership.

According to a translation of the text, Chen said that China needs to take steps to secure its industrial chain and supply chain and make strategic preparations to deal with the United States’ insistence on breaking the chain.

This means that if the US and its allies impose sanctions on China like those against Russia, China must recover Taiwan and “seize TSMC, a company that originally belonged to China,” she argued.

Chen said TSMC is speeding up its transfer to the United States, where it is building six factories. Her warning that “we must not allow all the goals of the transfer to be achieved” is a possible reference to the US CHIPS Act, which seeks to encourage the building of semiconductor fabrication plants on US soil and may include funding for the chipmaking facilities TSMC is building in Arizona.

Chen’s speech suggests that China should take this action only in response to threats against its economic security, and there is no reason to believe that China will follow in Russia’s footsteps and invade another country.

If the Taiwanese government adopted the scorched-earth policy proposed by the US Army War College last year, any attempt by China to seize Taiwan would be pointless.

Taiwan’s best deterrent against potential Chinese aggression is to put in place a credible strategy to destroy its manufacturing facilities if an invasion were to occur, which would deprive China of the supply of its semiconductors. Semiconductor Manufacturing International Company owns facilities on the island.

Taiwan is seen as vital by both the US and China because it accounts for a large part of the world’s chip manufacturing capacity. The island has 48 percent of the global foundry market and 61 percent of the world’s capacity to fabricate chips using a 16 nanometer process.

China, which last year produced only one in six of the chips its industries used, has set an ambitious goal of being 70 percent self-sufficient in chips by 2025.

In the first quarter of 2022, TSMC reported revenue of $18.6 billion, a 36 percent increase over the same quarter a year ago. The company expects sales to grow at the same rate in the current quarter due to high demand in the automotive and high performance computing markets.