Discussion in 'Computer Hardware' started by BlueScreen, Jan 25, 2015.
Gaming benchmarks are the best real world comparison we have for hardware.
Before everyone loses their mind, @redrobin is absolutely correct, so long as you're gaming, and not playing BeamNG. BeamNG doesn't act like most other games, as it has a frankly awful graphics engine with thread-driven physics simulation. Cities: Skylines is going to be the closest common analog to BeamNG, and Skylines is still one of the best CPU gaming benchmarks around.

But most games aren't reliant on CPU performance. They're reliant on GPU power. Different GPUs with different architectures perform differently in different use cases. The Radeon VII is not the be-all and end-all of $700 GPUs, but it is the best for memory-intensive workloads. Bulldozer was incredibly good at emulation, but was a dog at gaming.

If you're going to be using a system for playing BeamNG, there are plenty of people here who can tell you what you should buy; however, it probably won't line up with more mainstream games. And gaming benchmarks are the best benchmarks for people who game, just as scientific benchmarks are best for science stuff. Look at benchmarks that matter for your use case, not just whatever you want.
Gaming benchmarks are only good for the settings used; change the resolution and, poof, the situation is different. Hence the better hardware review sites show benchmarks at different resolutions and graphics settings even when doing a CPU review.
There are a surprising number of bad graphics engines and less-than-perfect optimizations. Usually that's no problem, but run a game on a 144Hz monitor and all of a sudden, poof, the situation changes, and again the 'common truth' is no truth anymore.
People tend to like easy: it is far easier to generalize and take some review for granted than to go in-depth and analyze how that result scales to your own intended use case. But I see so many common beliefs spoken as the only truth that it is horrible. I agree with what you say; your view is much saner than many I see.
The Radeon VII is a GPU meant for datacenter compute use, rebranded for some gamers. Buildzoid mentioned it being not even an attempt to compete in the high-end market, but more of an easy way to make a little money. It is surely good enough for a lot of things, but I think he nailed the truth of why it exists, because its existence did not make much sense to him.
ETS2 with max graphics, the Farming Simulators, BeamNG, RaceRoom, etc. etc. There are indeed a lot of games that just need high single-core performance. While BeamNG can use a ton of cores, it is still limited by single-core capability in how many cores it can actually use, at least when the GPU is not becoming the limiting factor.
Sadly people just don't have knowledge of these matters. They assume that because the general truths are different, I must be wrong. Well, not quite so: the general truths don't apply. There is no general "gaming use"; I think Funky7Monkey has this point too. There are only specific use cases, no general gaming use, as all games are different. One must know the game and what it needs in the specific use one is going to have.
Or be at mercy of marketing and hype, which most do.
Well, that's just the way these things go.
Even if you are playing BeamNG, gaming benchmarks are still quite relevant.
If you benchmark a bunch of CPUs in Tomb Raider, then also benchmark them in BeamNG, you won't see a perfect correlation*, but you will see quite a significant one.
I would also add that they are far far more relevant than single threaded synthetic benchmarks.
*specifically once you get to high core count CPUs
Yeah, single-core synthetic benchmarks are misleading, as turbo drops off when more cores are loaded; but still, clock speed and IPC matter more than the number of cores, imo, because most CPUs used in gaming setups today have more than enough threads, while not too many maintain high enough single-core performance at the lower turbo clocks for every use.
So single-core performance very much matters; however, you don't get the advertised single-core performance unless you overclock your CPU.
Hence my issue with the ridiculous 95W TDP design limit that keeps the all-core turbo so low. If the CPU were designed for 125W with a larger die area, it would have no problem running at much higher frequencies.
Delidding works because of that: it improves cooling. Disabling power limits works because of that too, and so on.
As someone who recently upgraded to a 1440p 144Hz freesync monitor, no, 144Hz doesn't change anything. Freesync does, but turn it off if you have issues. Games run exactly the same on 144Hz screens as on 60Hz screens, because it doesn't matter to games. They run at whatever frame rate, and the GPU figures out when to push frames to the monitor. With adaptive sync, the GPU will push a frame the moment it's ready, because the monitor is prepared to handle inconsistent frame times.
Yes, the situation changes when you change resolution. Which is exactly why pretty much everyone in the industry benchmarks at every common resolution.
Intel is particularly known for having very good single threaded performance. But it's good enough on both sides that it doesn't matter very much anymore, outside of older, single threaded games, and BeamNG. Gamers still only need 4-6 cores. Anything more than that isn't a "gaming" CPU anymore. It'll cost too much.
Side note: I suspect you are having some pretty significant thermal issues, or power delivery issues, if you're not reaching the clock speeds you desire.
You do get the advertised single threaded performance, without any overclocking or fiddling. If a workload is using more than one thread, it is a multi threaded workload. It is not a single thread workload. Hence a single threaded benchmark is no longer an appropriate measure of performance, if of course your workload uses more than one thread.
Single threaded benchmarks do not test how fast each thread is when working simultaneously in a multithreaded task. They test performance when one thread is active, and all others are (mostly) idle.
Games usually tie Lua to FPS: double the FPS and you double the single-core load from Lua. So it depends on the game, but it does matter, as many games use Lua in that way these days, BeamNG included.
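A tiny illustration of that scaling (made-up numbers, not any real engine's code): if script callbacks run once per rendered frame, the scripting cost on that one thread grows linearly with frame rate.

```python
# Toy model: per-frame Lua-style callbacks mean scripting load scales with FPS.

PER_FRAME_SCRIPT_MS = 2.0  # hypothetical cost of script callbacks per frame


def script_cpu_ms_per_second(fps: float) -> float:
    """CPU time (ms) spent in per-frame script hooks during one second."""
    return fps * PER_FRAME_SCRIPT_MS


low = script_cpu_ms_per_second(60)    # 120.0 ms of script work per second
high = script_cpu_ms_per_second(120)  # 240.0 ms: double the FPS, double the load
```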
I'm quite certain that the CPUs arriving a year after next summer will offer plenty of single-core performance; the situation is already good, just not the best.
I want to run 5GHz. I can't, because of crap TIM, crap VRM and so on; the die itself could do it fine, but there are issues. 4.3GHz is not quite enough for me; 4.6GHz helps surprisingly a lot. I can't really go higher because of issues, VRM mostly, but the TIM also starts to limit really fast, so a delid would be needed. The die, though, would do just fine.
I really dislike the designs of 8th gen and also 9th gen Intel for that reason, and Zen 2 is supposed to be better in this regard.
Nope, even sitting on the Windows desktop gives drops in clocks, as there is always something else using more threads, even if my program uses only one thread. It sucks; compared to 6th gen Intel it sucks big time, as that gets max turbo when playing BeamNG with a single car, i.e. it gets the advertised single-core performance.
But maybe what I want is multithreaded speed as fast as single-threaded speed when mostly 1 or 2 cores are working? In the original task manager graph you can see how all that work could have been done with 1 or 2 threads just fine; however, as there is a minuscule workload on the other threads, it drops the turbo clocks. This does not happen with 6th gen Intel or earlier either, afaik.
With overclocking that can of course be easily remedied, but the drop-off is too extreme at stock settings.
That depends hugely on how things are synced.
Video games can vary in this regard. However many do care about what frame rate they are pushing.
It can get complex depending on the exact game engine architecture and how it handles threading. But if you have any form of sync enabled (or frame limiting), your game's FPS is locked to your refresh rate. This can (but not always) also include things like mouse and keyboard inputs, game physics, all sorts really. It depends on how the game loop and scheduling are implemented.
Meaning at 20 FPS, some games will only process 20 mouse inputs per second. Or may only calculate collisions 20 times per second, which can lead to problems with objects phasing through each other.
This can form part of why some competitive games give a competitive edge to the players with the highest frame rates.
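A minimal sketch of that coupling (simulated event timestamps, not a real engine): if input is only drained once per rendered frame, a 20 FPS game gets exactly 20 chances per second to react, no matter how fast the mouse polls.

```python
def frame_batches(fps: int, poll_hz: int = 1000) -> dict:
    """Map one second of input events (polled at poll_hz) to the frame
    whose game-loop iteration would actually process them."""
    batches = {}
    for i in range(poll_hz):          # one event per poll tick for one second
        frame = (i * fps) // poll_hz  # frame that drains this queued event
        batches.setdefault(frame, []).append(i)
    return batches


b = frame_batches(20)
# At 20 FPS all 1000 polled events collapse into just 20 processing
# opportunities, ~50 queued events handled at once per frame.
```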
I think your testing methodology is flawed. Turbo boost is not designed to work the way you're showing, different cores will run at different clock speeds. As it's currently set up, stock, all cores will not run at 5GHz all the time. You'll also want to use a utility that shows the clock speed of each individual core, task manager doesn't do that. MSI Afterburner does (it's what I use). If you want 5GHz on all cores all the time, you will have to overclock (at your own risk), and will need sufficient cooling (TIM isn't the only factor), and sufficient power delivery. Turbo boost shouldn't cause the entire CPU to drop when a minor load is put on the slower cores. If it is, as you're suggesting, there's a significant problem with your CPU and/or motherboard.
Another thing that makes it difficult is that FPS counters tend to increase the strain on the CPU at higher FPS. I have mostly tested with BeamNG, but you get a drop in FPS when you activate the FPS counter while the GPU is not limiting FPS.
I would think that any game has things that cause X amount of CPU load per each frame; there probably is no way around that. CPU load increases with FPS, but it can be small enough that it never shows up, especially on systems where the GPU is the limiting factor, for example with less high-end graphics cards. It is certainly not such an issue in practice, but yeah, it again depends on the situation: the game, the way it's used, other stuff running, etc.
It looks to me like you did not understand what I was trying to say.
None of them is at 5GHz at this moment; the CPU load is minuscule, and this is running with Multi-Core Enhancement, which actually manages to keep the cores at higher clocks. Put more load on and the clocks drop further, currently to 4.6GHz as I set that as the minimum; stock minimum is 4.3GHz:
There are numerous articles about this issue. It is especially evident with the 8086K, as only 1 core runs at 5GHz; it drops a lot with 2 cores loaded already, and any kind of load prevents it from being at 5GHz. The 8700K has less of an issue, as it does not clock as high with 1 core loaded.
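That behaviour follows from the per-active-core turbo bin table. A sketch of the idea (bin values here are the widely reported 8086K numbers and may not be exact; check Intel's specifications before relying on them):

```python
# Approximate 8086K turbo bins, indexed by how many cores count as active.
TURBO_BINS_MHZ = {1: 5000, 2: 4600, 3: 4500, 4: 4400, 5: 4400, 6: 4300}


def max_turbo_mhz(active_cores: int) -> int:
    """Highest turbo the CPU will apply for a given number of active cores.

    Even a ~1% background load marks a core 'active', which is why the
    single-core 5GHz bin is almost never seen in practice."""
    active_cores = max(1, min(active_cores, max(TURBO_BINS_MHZ)))
    return TURBO_BINS_MHZ[active_cores]
```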
That is how these work. Don't believe me? You can find the same from any reliable hardware review site, but be aware of how Intel is today.
I don't think you're aware of the issues associated with Intel's implementation of power management.
EDIT: It's also to be noted that Intel does not rate their CPU TDP correctly, and haven't for a few generations now. Your problems are power delivery related. Get a better board.
Also, testing ANYTHING with Beam is a flawed methodology in and of itself. The physics core is nowhere near fast enough in its current state to be relevant for any CPU testing whatsoever.
redrobin, I think you are presuming too much.
The Asus ROG is a crappy board, I have found that out; everything under 350 euros or so is actually useless.
I'm topping out at 106W CPU power usage (that is the Blender benchmark) and no power limits are becoming active. Temps? Well, I think 64C is the highest I have seen.
Even 1% load on any of the cores counts and causes a reduction in clock speed, while never exceeding 40W and already sitting at the lowest turbo frequency, with CPU temps under 50C.
With 6th gen you could have much more load on a core before it counted enough for the turbo clocks to drop.
Not sure what part of that you don't understand, but for example: with 6th gen, if you get 5% load on each core, there is still maximum turbo; with 8th gen you get minimum turbo. With 8th gen even 1% counts, and 1% on each core lowers clocks to the minimum, so idling at the desktop with a browser open drops cores to 4.8 to 4.6GHz, while without the browser it holds 5GHz on most of the cores. Setting the balanced power plan drops clocks to 800MHz, while the high performance power plan keeps clocks at maximum; they don't get higher when the CPU cores get more load.
The issues are not power related; it is how this CPU is designed to work. Educate yourself before making too strong assumptions about my knowledge of the matter. You can read about it here: it hardly ever reaches 5GHz:
In practice it is just an 8700K with a bit better silicon and only theoretically higher turbo speeds.
It is how Intel's stuff works: compared to 6th gen, 8th gen drops clocks very aggressively.
BeamNG is great for testing how BeamNG will run, same is true with every game.
But hey, don't believe me; go out there, buy the hardware and run it, then you can see it for yourself, so you can believe.
It appears to me that you don't know how Intel rates TDP. 95W is for constant use, which Intel guarantees at base clocks; turbo clocks are then covered by temporary power limits, like 125W for 30 seconds. They are not rating their TDP wrong; it is just all kinds of enthusiasts not knowing their stuff and spreading silly beliefs. Intel rates their CPUs correctly, but people don't understand their rating system.
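That rating scheme can be sketched roughly like this, using the example numbers from the post above rather than official specs (real silicon actually tracks an exponentially weighted moving average of package power, not a simple timer, so treat this as a simplification):

```python
PL1_W = 95.0   # sustained limit: the advertised "TDP", guaranteed at base clocks
PL2_W = 125.0  # short-term turbo limit (example figure, not an official spec)
TAU_S = 30.0   # roughly how long PL2 may be sustained


def allowed_package_power_w(seconds_into_heavy_load: float) -> float:
    """Simplified power budget during a sustained all-core load: the CPU may
    draw PL2 for about TAU seconds, then must fall back to PL1."""
    return PL2_W if seconds_into_heavy_load < TAU_S else PL1_W
```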
Now, as you can read in the article, they too mention that a higher TDP would be nice. Yeah, it would be nice, as would a larger die surface area; the die itself would be perfectly fine at 5GHz on all cores. But starting from 8th gen they made the design choice that their chips would have nice power in benchmarks and in theory, while in practice base clock performance is all they guarantee to deliver.
Turbo boost has been castrated as they added more cores without increasing the power ratings, and only by overclocking do you get what the chip can really do.
I don't know why such basic knowledge is missing here and all kinds of assumptions are made instead. Buy the hardware, test it and learn something, instead of presuming and assuming.
And there are those who are stable at 5GHz all day long. The only person who doesn't know what's going on here is you.
Your issues are power delivery. Get a better board.
Again you are assuming things that don't exist.
Did it never occur to you that I'm not even trying to run 5GHz on all cores here?
Why don't you read it again and try to understand, instead of trying your best to misunderstand? Or else don't reply with such useless nonsense.
Okay, maybe I need to explain like really really simple way:
I am talking about how Intel could easily have made the 8th gen CPUs run 5GHz on all cores out of the box.
I am not talking about me not being able to overclock to 5GHz on all cores.
I am talking about how Intel's decisions artificially reduce CPU stock speeds, so that the advertised 5GHz does not happen, because of their less-than-optimal choices.
I am still not talking about overclocking all cores to 5GHz, or about being unable to overclock all cores to 5GHz.
My supposed problem with overclocking is something you made up in your own mind; it exists only in your mind and is not based in this reality.
--- Post updated ---
Also, one question I have: do people here find it difficult to follow a discussion when more than one subject is being discussed at the same time?
I have heard some people really can't tell which subject is being discussed and need only one subject at a time; maybe that has something to do with the lack of understanding of what is being discussed here?
If Intel's advertising is wrong, sue them. Although, it's a bit late for that, you've voided your warranty already.
Why would they? The power draw for that to reliably hold 5GHz would be more than some consumers would be comfortable with. And why does clock speed matter? The megahertz wars are long over. The only people who overclock now are enthusiasts. 5GHz is nothing but a marketing buzzword now. There are few applications outside of gaming that are so poorly threaded that manycore CPUs would be worse than a small core count at 5GHz. And for the record, Piledriver is the fastest in the world. (Sources: https://hwbot.org/benchmark/cpu_frequency/halloffame https://valid.x86.fr/records.html)
There is really a lot of single-threaded code, even within Windows. We are not in multithreaded heaven yet, even if believers tout that as the truth; that will still take years, but it is good that more and more has moved in that direction.
Pretty much anything industrial is heavily reliant on a single core, even today and for the foreseeable future.
The pinnacle of technology, getting the best out of what they build; certainly you would expect them to do the best they can instead of some half-arsed job. The potential processing power is quite a bit more than what people are getting out of the box. It is not the gigahertz, but the processing power per thread that the CPU is capable of; the more the better. The number of threads is in the same way a cheat, as you can't thread code freely; it has limits, and increasing the number of threads a CPU can run gives quite little after a certain, rather variable point (it depends on the software/game).
Thankfully software is moving away from such limiting aspects, and thankfully, thanks to AMD, there is more of a CPU race happening. But don't you think it is kind of lame of Intel not to have pushed the technology to the limit during all these years?
They could easily have gone much further.
I thought that you could have a general discussion about Intel's CPU design choices, but instead many are inclined to kill the discussion by derailing, trolling, giving very bad and expensive advice without understanding what is being discussed, etc. I can see why people view this forum as toxic.
Even Blender uses just one thread when rotating the view, but even that one thread drops turbo to the minimum. The load is on a single thread, so it is certainly not a power-related drop; that is just how turbo works on 8th gen. Try it yourself if you don't believe me.
Excuse me, who is doing that? I don't see that going on here. How can you complain about "very bad and expensive advice" when you bought one of the most irrelevant and overpriced CPUs of the past generation?
It doesn't need to be on the fastest equipment, though. Pretty much everything industrial runs on legacy equipment because that's what works.
FYI, I can't understand half of what you're trying to say here, but I'll try. The best Intel can do on the consumer side is the i9-9900K; on the enthusiast side (and something I can't recommend at all) it's the i9-9980XE and 9990XE. Intel is only making these 3 parts because of Zen's pricing and scalability. Intel doesn't need to make the best, because the majority of consumers don't realize AMD is competitive. They can get by on their name alone.
Intel can get by because they have the CPUs that are fastest in gaming workloads, which is a sizeable proportion of the market of people who care, and the ability to twist the arms of OEMs due to their dominant position, putting their CPUs in everyone else's hands. Not to mention their low-power Atom and Core M CPUs in the super portable devices that are quite popular these days.
And of course, as you say, a little brand recognition certainly doesn't hurt them.
That said, I'm not sure that the average consumer cares what CPU a laptop has; from walking into a computer shop, it seems that they are trying to push computers based on the quantity of RAM alone.
So my friend put a new GPU into my computer (since I'm so bad at tech stuff I can't even put in what is basically a plug, and I didn't wanna screw it up) and he said to me that my hard drive should have gone a long time ago. So at this point I'm stuck. I've got an old computer from 2014 with an old worn-out hard drive and an i5 CPU. Should I just get the hard drive replaced, or say screw that and look for a new computer? (I can't build a computer, so that is out of the question.)