Minimum requirements for laptop

Community Support for the machines running the game
User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Mon Feb 25, 2019 2:32 am

LeDoyen wrote:gaming on laptops is usually not a good idea anyway.
And don't be impressed by lithography numbers. AMD will always manage to provide 7nm cooking stoves to gamers worldwide :)

I disagree. Both AMD and Intel have to throttle when they hit their thermal limits; same difference.

In fact, Intel keeping peripherals on separate chipset silicon (at larger feature sizes), while AMD puts everything on the SoC (system-on-a-chip), is why total system thermal/power consumption for Intel is just as bad, or worse -- once you look beyond the CPU alone.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

LeDoyen
Expert
Posts: 342
Joined: Fri Jul 28, 2017 7:48 pm
CMDR: Le Doyen
CMDR_Platform: PC-MAC
Contact:

Re: Minimum requirements for laptop

Postby LeDoyen » Mon Feb 25, 2019 11:19 am

Look at TDP, and die size.
AMD can't match the Intel architecture, so they keep multiplying the number of cores to get close to, or slightly above, Intel's performance, which in turn explodes their power requirements and TDP per chip.
An i7 9700K still has a TDP of 95W and barely heats up, while the latest Threadrippers are ungodly power mongers with twice the TDP (but not twice the performance) that can easily warm up a reasonably sized flat.
I do not know the details of each architecture, secret as they are, but it seems AMD goes for a brute-force approach, with the consequences we know. That's why they rush to get 10 and now 7nm node production out.
If they were still at 14-12nm like Intel, they would struggle with heat and throttling issues.

Edit: Not saying that the brute-force approach is bad, though. It's awesome in heavily multithreaded applications, after all, and AMD's costs are still significantly lower than Intel's. Just a matter of picking what you prefer/really need.
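
To put rough per-core numbers on that comparison, here's a back-of-the-envelope sketch (assuming the 8-core i7-9700K at its 95 W rating against a 32-core Threadripper 2990WX at its 250 W rating; TDP is a vendor cooling budget, not measured draw):

```rust
// Back-of-the-envelope TDP-per-core comparison. TDP is the vendor's rated
// cooling budget, not measured power draw, so treat these as rough proportions.
fn main() {
    // (name, rated TDP in watts, physical core count) -- spec-sheet figures
    // for the two parts assumed above.
    let parts = [
        ("Intel i7-9700K", 95.0_f64, 8u32),
        ("AMD Threadripper 2990WX", 250.0, 32),
    ];

    for (name, tdp_w, cores) in parts {
        println!(
            "{name}: {tdp_w} W / {cores} cores = {:.1} W per core",
            tdp_w / cores as f64
        );
    }
}
```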

User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Wed May 15, 2019 11:49 pm

LeDoyen wrote:Look at TDP, and die size.
AMDs can't match the Intel architecture

Which architecture? Pre-Zen? Sure. AMD hasn't had an advantage since the early Hammer. Post-Zen? Not so fast.

Not only is i-Core falling behind, but now that Zen2 (3rd gen) is on TSMC's 7nm feature size, Intel is the one in trouble. It's more than just cores. Although ARM was ahead of AMD at TSMC, because ARM has the economies-of-scale advantage (more on that in a bit).

Intel used to have fabrication advantages under its belt. Now it does not. They haven't redesigned the core in over 8 years. AMD redesigned theirs within the last 2-3 years, and Zen2 is a killer, 3rd-gen refit.

The combination means Intel is now behind. Heck, Intel Xeon is starting to get toasted by ARM in the data center, not just on power but on performance. ARM will be taking over ... AMD has known that since 2010 and licensed ARM for a reason, and even Intel finally gave in. The only reason ARM hasn't taken over the gaming PC is Windows and its heavy MFC C++ legacy of Win32/x86, unportable code.

The nice, added reality about ARM is that there is actually a competitive GPU market, unlike PC, where it's only AMD and nVidia.



Also keep in mind that AMD's TDP can be higher because they don't use external chipset logic; their processors are SoCs (system-on-a-chip). That's why AMD's mainboards are so dirt cheap: they are largely 'pin-outs' from the high pin count of the AMD CPUs.

Intel is notorious for not only having more expensive mainboards, but chipsets at 25nm feature sizes sucking up 10W+ alone. Sure, it's not on the chip, but still consuming power.

LeDoyen wrote:so they keep multiplying the number of cores to get closer or slightly above their performance which in turn explode their power requirements and TDP per chip.

That's no longer true, especially not with Zen2.

Even Zen, let alone the refit Zen+, was within 8% of Intel single-core performance, MHz for MHz. In fact, Intel 'cheated' by having very large L3 caches, where AMD is more efficient in the L1-L2.

We'll see where Zen2 is, single-core wise, in the real benchmarks after release.

LeDoyen wrote:AMD is moving to 7nm before Intel. And more games are threading more and more, removing Intel's single core, non-threaded advantage.

Intel really screwed up on ignoring a lot of things.

LeDoyen wrote:An i7 9700K still has a TDP of 95W and barely heats up, while the latest Threadrippers are ungodly power mongers with twice the TDP (but not twice the performance) that can easily warm up a reasonably sized flat.

You're really stretching the truth there. Also keep the core counts in mind. With fewer cores, AMD would have lower consumption, while its single-core and 2-4 core benchmarks were still within 8%.

And, again, keep in mind that the chipset is on the CPU. Intel doesn't do that.
Lastly ... Intel removes the GPU in several models, which is also a factor in overall heat.

LeDoyen wrote:I do not know the details of each architecture, secret as they are, but it seems AMD goes for a brute-force approach, with the consequences we know. That's why they rush to get 10 and now 7nm node production out.
If they were still at 14-12nm like Intel, they would struggle with heat and throttling issues.

AMD's Zen architecture is far, far more efficient than prior, especially Zen+ (2nd gen). We'll see where Zen2 (3rd gen) falls.

The high core count is just because AMD can do it.
Intel is way, way behind.

E.g., the original i9 was little more than their Xeon E5 rebranded as a consumer processor -- they didn't realize, unlike AMD, that not only are there a lot of threaded applications that gamers use, like video streaming, but more games (e.g., Far Cry 5) are starting to be well-written for threading.

There are a lot of libraries out there for threading, as well as entire thread-safe languages now (e.g., Mozilla's Rust), that can make good use of 8-12 real cores. Not just 4-6 cores with 2 threads per core, but 8-12 real cores, and even 16-24 threads.
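
As a minimal sketch of what that looks like (plain std::thread from the Rust standard library, no external crates; the chunked-sum workload is just a stand-in for any parallel job):

```rust
use std::thread;

// Split a workload across however many hardware threads the machine reports,
// using only the standard library.
fn main() {
    let data: Vec<u64> = (0..10_000_000).collect();

    // available_parallelism() reports logical CPUs (cores x SMT threads).
    let workers = thread::available_parallelism().map(|n| n.get()).unwrap_or(4);
    let chunk = (data.len() + workers - 1) / workers;

    let total: u64 = thread::scope(|s| {
        data.chunks(chunk)
            .map(|slice| s.spawn(move || slice.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all workers first, then join them
            .into_iter()
            .map(|handle| handle.join().unwrap())
            .sum()
    });

    println!("{workers} workers, sum = {total}");
}
```

On a chip with 8-12 real cores, each of those scoped threads can run on its own core instead of time-slicing.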

LeDoyen wrote:Edit: Not saying that the brute-force approach is bad, though. It's awesome in heavily multithreaded applications, after all, and AMD's costs are still significantly lower than Intel's. Just a matter of picking what you prefer/really need.

AMD's costs are significantly lower because AMD goes for a SoC approach.
Intel still hasn't embraced that ... except for embedded.

BTW, just because Intel has been traditionally better at lossy SIMD gaming math doesn't mean it's better at FPU precision (AMD is king there -- always has been, always will be).

And on the low end, Intel has finally 'thrown in the towel' on Atom.
Not only did ARM best Atom at in-order execution, but at speculative, superscalar designs too -- every time a new Atom came out, ARM bested it.
That's why Intel re-licensed ARM (2016), almost exactly a decade after they sold their ARM division to Marvell (2006).

The sooner x86 dies, the happier I will be. Binary translation is coming back for the few, remaining stragglers. Although the overwhelming supermajority of the cloud is already non-Win32/x86 solutions, right down to Microsoft itself using GNU/Linux software defined infrastructure. That's why they are trying to port so much over to GNU/Linux themselves.

The funny thing is that i-Core in low power states is better than superscalar Atom in performance per watt.
AMD also has far more options, thanx to its SoC approach that includes everything.

A lot of us used AMD Socket-FT3 in embedded solutions for a reason ... beyond just cost.
Atom sucked up a lot more power for less performance ... and -- again -- running a Pentium, i3 or even i5 in a low power state was better.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Thu May 16, 2019 12:08 am

Oh, BTW, the 'big advantage' Intel has is that they 'control' the GPU interconnect, and it's largely software.

The few, specialized systems that used AMD's advanced I/O MMU, even last decade, with the GPU directly 'as a peer to the CPU,' roasted Intel. Had AMD been able to control the GPU interconnect, we wouldn't be using PCIe, or even AGP before that. We're talking near-linear scaling as GPU units increase, on a killer scale.

And that is where ARMv8 and its GPU options and innovation really destroy x86-64 and its legacy, Intel-centric, inefficient software ecosystem. I cannot emphasize enough how horrendously poor PCIe is, and how much hacking in software is required for GPUs. It's why Intel has so many 'side-channel attack vectors' through its crappy Nehalem legacy: they didn't design a real TLB and I/O MMU.

AMD did ... from day 1 ... in x86-64. And they designed ways for the GPU to be on the same interconnect as the CPU. Intel could not before Nehalem, which is why we got PCIe, and now, post-Nehalem, we're still dealing with their issues due to lack of hardware design. Sure, there are other exploits for general x86-64, and even ARMv8, but the Intel-specific stuff is directly related to all those software hacks and having to use something like PCIe.

It's infuriating to see GPU performance held back by the Intel-centric PC architecture with PCIe and software-based control, let alone the added security issues.

We'll see what happens in the coming years, as ARM is now going to the high-end performance-wise, with integrated GPUs for vector processing at scales no PC is capable of in price-performance-efficiency. AMD and nVidia will lead, but we'll also see ARM's own Mali, along with PowerVR and Adreno options show up too.

It might be something we'll see in a Kirin (Huawei's HiSilicon) that utterly blindsides everyone, too, especially with SoftBank continuing to improve ARM's own Mali cores.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

LeDoyen
Expert
Posts: 342
Joined: Fri Jul 28, 2017 7:48 pm
CMDR: Le Doyen
CMDR_Platform: PC-MAC
Contact:

Re: Minimum requirements for laptop

Postby LeDoyen » Thu May 16, 2019 5:28 pm

Do you know that Intel CPUs have had a SoC architecture too, since Nehalem, circa 2008?

The northbridge is no more, and the motherboard only handles the less intensive odds and ends now.
When you look, for example, at the power consumption of any i-CPU, you'll see the package and core power ... and another one called "uncore," which is the part of the die that manages what used to be on the motherboard chipsets.
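
On Linux you can watch that split yourself through the kernel's powercap/RAPL counters. A rough sketch, assuming the intel_rapl driver is loaded and the usual /sys/class/powercap layout (a package zone plus sub-zones typically named core, uncore and dram):

```rust
use std::fs;

// Dump the RAPL energy counters the kernel exposes, so you can compare
// package vs. core vs. "uncore" energy yourself. Assumes Linux with the
// intel_rapl powercap driver; the directory simply won't exist elsewhere.
fn main() -> std::io::Result<()> {
    for entry in fs::read_dir("/sys/class/powercap")? {
        let path = entry?.path();
        // Zones look like intel-rapl:0 (package) and intel-rapl:0:0,
        // intel-rapl:0:1, ... (core / uncore / dram sub-zones).
        let name = fs::read_to_string(path.join("name")).unwrap_or_default();
        let energy_uj = fs::read_to_string(path.join("energy_uj")).unwrap_or_default();
        println!(
            "{:<40} {:<10} {:>16} uJ",
            path.display().to_string(),
            name.trim(),
            energy_uj.trim()
        );
    }
    Ok(())
}
```

energy_uj is a cumulative microjoule counter, so sample it twice and divide the delta by the interval to get watts.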

User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Sun Jun 02, 2019 9:19 pm

LeDoyen wrote:Do you know that Intel CPUs have a SoC architecture too? since Nehalem, circa 2008
Nehalem is NOT SoC! Do you understand what SoC is?!

Nehalem just added a switching system interconnect with NUMA, removing the old 'shared bus' design -- which is still NOT done in hardware on Intel, and is the whole reason for their current security issues too. I.e., I'm under NDA with Intel because we had their Nehalems on Wall Street in 2007 (yes, 2007).

Don't get me started on how Intel uses microcode and software to do things AMD does in hardware, since AMD designed a real TLB, paging unit and other things, with a full I/O MMU. If AMD controlled the PC's GPU interconnect, our video would NOT be PCIe. Again, don't get me started.

LeDoyen wrote:The northbridge is no more, and the motherboard only handles all the less intensive odds and ends now.
Sigh ... the I/O Controller Hub (ICH) is still NOT on the die.

LeDoyen wrote:When you look for example at power consumption of any i-CPU, you'll see the package and core power.. and another one that is called "uncore" which is the part of the die that manages what used to be on the motherboard chipsets.
Again, you're totally confusing multiple concepts.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

LeDoyen
Expert
Posts: 342
Joined: Fri Jul 28, 2017 7:48 pm
CMDR: Le Doyen
CMDR_Platform: PC-MAC
Contact:

Re: Minimum requirements for laptop

Postby LeDoyen » Sun Jun 02, 2019 11:00 pm

Apparently, the definition of SoC that we use in the semiconductor industry and the one you understand do seem to differ, yes :)

User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Wed Jun 26, 2019 9:28 pm

LeDoyen wrote:Apparently, the definition of SoC that we use in the semiconductor industry and the one you understand seem to differ yes :)

Ummm, you just work in a different part of the semiconductor industry than I do.
In fact, it sounds like a very Intel-only world, ignoring AMD even when it just comes to x86.


First off, System on a Chip (SoC) ...

SoC has been around outside x86 for a long time. PowerQUICC was a common, popular one for PowerPC in the '00s. Various ARMv7 and earlier options existed as well. I'm speaking from integration experience, because those are the ones I'm most familiar with from last decade, before ARMv8 this decade. I won't touch on 68K and early Power, or MIPS for that matter. Back then I was actually designing memory and system interconnects with synthesis tools. I haven't done much of that since '05, I'll admit, but I've done plenty of embedded integration and development.

But even some 386/486 and, later, i686 SoCs were out there (IDT/Centaur, SGS-Thomson, etc. ... usually carrier-grade and other industries), with everything including all the LPC (Legacy PC) functionality that Intel still puts in the I/O Controller Hub (ICH). I remember this because it came up when the Fedora project considered moving from i486 to i686 instruction-set minimum compatibility, while always optimizing for in-order Atom (because Atom sucked if it was optimized for i686 Pro/II/III). There were still a handful of integrators using 486-compatible SoC options out there.

BTW, probably the first SoC was the 186EM, but that's another debate -- although that's splitting hairs, because the LPC was more minimal back then. But AMD dominated Intel back then, and AMD even had a better 8087 (they even helped Intel design and fab theirs; long story).


Secondly, NUMA and the lack of a FSB have nothing to do with SoC. One can have a FSB on-chip and still be a SoC.

I.e., what you call SoC, we call the GPU, NUMA and peripheral interconnect, but not the peripherals.
SoC in our world actually includes all the peripherals, and any additional peripheral interconnect is optional, not required.

E.g., Windows (GNU/Linux is another story -- although it's not really a PC any more at that stage, because it's non-PC compatible firmware) won't freak'n work without the Intel ICH, because there are still legacy PC (LPC) functions in it, no different than the Southbridge. ;)

Furthermore ...

AMD threw away the Front Side Bus (FSB) back in the 32-bit era when it created the 32-bit Athlon based on the 40-bit Alpha 21264 EV6 platform, with a crossbar switch. Eventually AMD just went with a broadcast mesh for 64-bit with up to eight (8) nodes, being 40-bit from day 1, and upping to the 48-bit maximum flat addressing of x86-64 Long Mode and, later, the full 52-bit Physical Address Extension (PAE) that x86-64 Long Mode is capable of.
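
For concreteness, here is what those bit widths translate to in raw address space (simple arithmetic, nothing vendor-specific):

```rust
// How much address space each of those widths actually covers.
fn main() {
    // (description, address bits), per the widths discussed above
    let widths = [
        ("i686 + 36-bit PAE physical", 36u32),
        ("EV6-era 40-bit physical", 40),
        ("x86-64 Long Mode 48-bit virtual", 48),
        ("x86-64 52-bit physical maximum", 52),
    ];
    for (label, bits) in widths {
        let bytes = 1u128 << bits; // 2^bits bytes
        println!("{label:34} {bits:2} bits = {:>9} GiB", bytes >> 30);
    }
}
```

That works out to 64 GiB, 1 TiB, 256 TiB and 4 PiB respectively.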

x86-64 Long Mode being the i686 32-bit/36-bit-PAE-compatible mode, so x86-64 OSes can still run segmented i686 binaries/libraries. Without Long Mode, Windows x64 wouldn't work, as a lot of it is still 32-bit libraries. Even Canonical (Ubuntu) got a 'rude awakening' when it decided to yank all 32-bit libraries in its next release, only to find out it utterly breaks Steam (not just WINE), etc., because so many libraries from so many games released for Linux depend on 32-bit libraries built for Windows as well. GNU/Linux might have been 64-bit 'clean' from day 1 (thanx to my colleague Jon "Mad Dog" Hall and others who got Alphas to Linus in '94), but WinForms/Win32 is so freak'n not.

All Nehalem did was follow what AMD had done almost a decade earlier, finally dumping the FSB, with one exception -- Intel never put the actual I/O MMU and other 'segmentation/protection' in hardware. That's why Intel is f'ing having exploit after exploit: they still rely on software, unlike AMD. Nehalem also tried to address, unsuccessfully at first, not only the 36-bit PAE limitations of i686 (going to 38-bit), but also >32-bit (4 GiB) memory-mapped I/O being 'unsafe.' It took a lot of freak'n Linux kernel hacking, and Microsoft didn't even bother with Windows x64 for a while (it just kept I/O under 4 GiB).

E.g., it's why Intel wasn't >32-bit "safe" when Intel's IA-32e (x86-64) processors first appeared, and the Linux/x86-64 kernel had to use 'bounce buffers' for any I/O mapped above 4 GiB. It's a problem that still exists today, because the OS only has 'tricks' to mitigate the performance hits; it isn't actually handled in hardware.
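
The 'bounce buffer' idea itself is simple: if a transfer targets memory the device (or the chipset path to it) can't address, stage it through a buffer in low memory and eat an extra copy each way. A purely illustrative sketch, not any particular kernel's DMA API:

```rust
// Illustrative only: the decision behind a "bounce buffer" for a device (or
// chipset path) that can only address the low 4 GiB. Real kernels do this
// inside their DMA-mapping layer; no actual kernel API is used here.
const DMA_LIMIT: u64 = 1 << 32; // 4 GiB addressing limit

struct Transfer {
    phys_addr: u64, // physical address of the buffer the driver wants to use
    len: u64,
}

fn needs_bounce(t: &Transfer) -> bool {
    // If any byte of the buffer lies above the limit, the device can't reach
    // it directly, so the data must be staged through low memory (two copies).
    t.phys_addr.saturating_add(t.len) > DMA_LIMIT
}

fn main() {
    let transfers = [
        Transfer { phys_addr: 0x0000_0000_1000_0000, len: 4096 }, // below 4 GiB
        Transfer { phys_addr: 0x0000_0001_2000_0000, len: 4096 }, // above 4 GiB
    ];
    for t in &transfers {
        println!(
            "addr {:#x}, len {}: bounce = {}",
            t.phys_addr, t.len, needs_bounce(t)
        );
    }
}
```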

I know because I saw this crap under Intel NDA in late 2007 -- yes, before Nehalem was released -- because I was working on Intel pre-release engineering hardware for high-speed trading at Goldman and Lehman Brothers. I formally left embedded after I left TimeSys (where I was previously under NDA on Atom -- I said it would fail to gain any market in embedded, and I was dead-on; the 'netbooks' actually saved it, with Microsoft licensing Windows at 1/5th the price) to join Red Hat in 2007 (and finally get away from the long hours), although Red Hat still had me on a few 'custom projects' once I joined, like in trading, retail, etc. (the hours went back up -- but at least I got massive stock bonuses at times for doing those), given my experience at TimeSys, IPC Systems before that, let alone actually doing some layout years before that.

That same f'n 'design flaw' I saw in 2007 is still there! Intel is still relying on software. When I was working with Theseus Logic (just before they were acquired by Camgian), people I knew at API Networks (fka Alpha Processor, Inc., spun off by Digital, then part of AMD) predicted this would happen. That's why everyone in the x86-oriented embedded world was trying to support HyperTransport -- because unlike Intel, AMD actually designed a freak'n hardware-safe x86-64.

Even ARMv7 added an I/O MMU and out-of-order, speculative execution while Intel was still getting Atom up to speed! And I remember when ARM didn't even have an MMU, and uClinux was a fork of the kernel, because the mainline Linux kernel required at least an MMU.


Actual, real, Intel SoC ...

Intel first introduced an x86 SoC with the later in-order Atom designs, let alone the out-of-order Atom designs (the latest being the Goldmont series) -- ignoring really old stuff like the 386EM or even 186EM, of course (although AMD bested them there too). But the Atom SoCs were largely introduced because AMD's Processor 14h (first Socket-FT1, then 16h/FT3) was kicking Atom's butt, badly, in both cost and performance.

I.e., before then, AMD's Geode x86 SoCs were always 'well behind' Intel's leading-edge solutions, so partners and integrators were willing to pay the price/power-consumption 'penalty' of Intel ICH chips, usually fabbed at bigger feature sizes. But Processor 14h was basically a full-up K10, just without the L3 cache and with only a single DDR channel, severely cutting down on pinout (BGA-423) and traces. Intel woke up to integrators literally ignoring Atom and going full AMD when they didn't just pick up an i-Core (or i3-based Pentium). AMD later brought out FT3 (BGA-769) for more traces and options, which is what finally solidified AMD BGA over Atom BGA options.

Intel still really doesn't have a 'cost-effective,' higher-end SoC solution today, and it's only Intel partnerships that keep it alive with the Atom designs -- which include the Celeron/Pentium-J/N products, now known as Pentium Silver (while the real i-series-derived parts are Pentium Gold). AMD, on the other hand, has its Jaguar/Kabini SoCs in all sorts of systems, including even the PS4 and Xbox One consoles.

And against ARM, Atom is a joke. I still remember when the Cortex A17 hit, and my friends at Intel just dropped F-words. Atom was always behind ARMv7 developments, and definitely v8 now. Sigh ... remember when 64-bit MIPS was going to be the future end-all, be-all for everyone back in '90? ;)


So, again ...

I really don't know where you f'ing got the idea that NUMA and removing the FSB equal SoC. I honestly have to question whether you know anything outside of Intel, because x86 SoCs have been around since the 186EM days.

SoC means you need zero support chips, other than voltage regulators, capacitance, memory (and sometimes not even that, especially if some of the SRAM can be used as memory instead of cache) and the traces necessary to interface. Intel purposely doesn't do that, and -- in fact -- hurts itself, because its platforms end up so costly as a result.

Intel really hasn't designed much this century as far as the platform goes. Again, there was a reason we didn't use Intel in the financial world during the '00s at all, and why we were wary even this decade.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

LeDoyen
Expert
Posts: 342
Joined: Fri Jul 28, 2017 7:48 pm
CMDR: Le Doyen
CMDR_Platform: PC-MAC
Contact:

Re: Minimum requirements for laptop

Postby LeDoyen » Fri Jun 28, 2019 5:37 pm

Nice post, but SoCs are not all one-chip-only solutions. Many use support chips, or systems are made of several SoCs. Not every company has the means to design and make their own homebrew SoC.
You talk only of computer processors; I don't :) Just look at your smartphone, or your car electronics.
You seem to work with end products, maybe more on the software side of things. I work in front-end fabs, so I understand your view of SoCs, but a "one chip does all" is not the end of it. It's the ultimate goal for some, of course, but over-integrating also introduces limitations. Not everyone wants everything on the die.
I understand you like to bash Intel, at great length apparently, but I care more about game performance than opinions (and I certainly don't care about server performance, VMs, financial applications, Linux attempts at gaming, or who shot first, AMD or Intel).
I liked the early Pentiums. I loved my Athlon XP, and AMD has been dragging its feet since the Core Duo era. Only today are they at last back in the game with the latest Ryzen. If I were looking to build a new PC today, I would totally consider them.
Intel fanboys will be happy about the price drops, AMD fanboys will be happy about the high-end performance. That's all there is to it for me.

User avatar
thebs
Master
Posts: 732
Joined: Sat Apr 16, 2016 6:49 pm
CMDR: thebs
CMDR_Platform: None Specified
Contact:

Re: Minimum requirements for laptop

Postby thebs » Sat Jun 29, 2019 10:04 pm

LeDoyen wrote:Nice post but SoCs are not all a one chip only solution. Many use support chips or systems are made of several SoCs. Not every company has the means to design and make their own homebrew SoC.

They absolutely do; it's called going fabless. I've worked at a couple myself. It's none too difficult to design an ASIC around various cores, including various peripherals. I've done it for aerospace as well as for high-speed financial trading.

On the embedded front, in addition to Boeing, L-3, IPC Systems and others, I've also worked at TimeSys (one of the two early embedded/real-time GNU/Linux vendors) as well as in Red Hat's Global Engineering Services (GES), which used to be Cygnus. I sincerely hope you know who they are, as well as my long-time colleague Michael Tiemann.

LeDoyen wrote:You talk only of computer processors, i don't :)
Come again? I mentioned working on PowerQUICC, ARM and others in the '00s, let alone 68K and early Power in the '90s. You were the one talking Intel x86-only.

Listen, because I'm only going to say this once ...

I don't expect people to know a lot of this. But when you start pulling out things that literally make you sound foolish to someone who has an EE with a semiconductor specialty, several years of experience doing actual design synthesis and layout, and a good decade-plus of embedded work, including working directly with US, Taiwanese and other ODMs, then I don't know what to tell you.

Because you're just proving you do not, and that's why you don't recognize when I speak of rudimentary concepts and companies.

LeDoyen wrote:Just look at your smartphone, or your car electronics.

Where did you not see that I mentioned I've worked on embedded and real-time VxWorks, QNX and GNU/Linux? In addition to all of the architectures I mentioned, like the popular (in the '00s) PowerQUICC and various ARM implementations?

Did you not see that?!

So ... are you just going to be generic? Or are you going to get into the specifics?

E.g., just on cars: this decade (the '10s), I've worked with GM while at Red Hat as one of Red Hat's customer-facing embedded developers. GM has long been on QNX but has been looking to get away from it long-term. Ford eventually dropped Windows and went QNX as well. Toyota and Scion have a lot of uncertified GNU/Linux devkits, and have the initial quality issues to prove it.

I can go on and on, but until you actually start getting into the specifics, I'm the only one doing that here.

LeDoyen wrote:You seem to work with end products, maybe more on the software side of things..

Yes, because I haven't worked in layout and low-level fabrication since 2005. I fully stated that. But most things today don't need to be at that level, because so many fabless firms are offering so many options, including FPGA on-die as well.

But I've worked with countless firms, and given them specifications for SBCs and other things since. But yes, for the last 12+ years, I've been doing more embedded OS and firmware development.

LeDoyen wrote:But you literally don't know what you're talking about.
Dude, you've proved you don't know anything of what you're talking about. Have you stated one, specific thing yet? No.

Meanwhile, you don't even recognize what PowerQUICC or ARM are. If you did, then you'd recognize that I am talking about phones, cars, etc...

E.g., the IPC IQ/Max I helped develop the firmware for uses PowerQUICC (backpack/brains) and ARM (front-end) with GNU/Linux as the RTOS.

LeDoyen wrote:i work in front end fabs,
Care to state one specific thing?! Remember, I have an EE with a semiconductor specialization as my formal education, plus the dozen or so years of experience. So state something ... anything ... I might recognize!

LeDoyen wrote:so i understand your view about SoCs, but a "one chip does all" is not the end of it. It's the ultimate goal for some, of course but overintegrating also introduces limitations. Not everyone wants to have all in die.
Of course the Voltage Regulation (VR), intermediate capacitance and other things aren't on the chip. There can also be limits with PHY interfaces and wireless on-die as well.

But, in the case of Intel, it does very much mean the ICH is in the freak'n chip -- by Intel's own definition!

Seriously, go to developer.intel.com, and you'll find the specification sheets for SoC x86 solutions. They are few and far between, but they do exist.

Dude, you have reached a level of ignorance I cannot even entertain further. I'm just sorry. You're beyond merely stating incorrect definitions; you literally haven't named a single thing.

All while missing my very explicit, specific names of products, systems, etc... You've named none.
CMDR TheBS - Yet Another Middle Aged American (YAMAA) and Extremely Casual Gamer often confused for a Total Noob

