BeamNG needs Nvidia's Deep Learning Super Sampling (DLSS) and/or AMD's FidelityFX Super Resolution (FSR) to improve how it runs.

Back story: My system handles Cyberpunk 2077 at nearly max settings with DLSS, yet struggles to run BeamNG on higher settings!

Why DLSS?
DLSS uses Nvidia's highly sophisticated AI upscaling to render high-quality images, with each version adding even greater improvements. DLSS 3 can even generate entirely new frames, and its AI reconstruction delivers noticeably higher image quality and more accurate colors than FSR. The main flaw is that it is only available on RTX graphics cards, so it's less accessible than FSR.
A sample of DLSS performance gains: https://www.youtube.com/eGHjP9zq53w

Why FSR?
FSR can run on anything from your grandma's computer to the latest and greatest water-cooled $10,000 PC. It is slightly less sophisticated than DLSS and produces slightly lower image quality, notably less accurate, washed-out colors.
A sample of FSR performance gains: https://www.youtube.com/artc5DHyxmc

Why both?
FSR tends to provide lower image quality than DLSS, but it runs on anything. That is why I believe both should be added: FSR for those on AMD GPUs and lower-end systems, and DLSS for those with newer Nvidia GPUs. I can totally understand why FSR would be the priority, though.

Conclusion
BeamNG would be at a whole new level with some form of upscaler, allowing much higher quality with far fewer performance concerns. While both FSR and DLSS would be nice, I believe FSR should be the priority so everyone benefits. Thank you all for reading, and I hope we get one of these in the game in a future update!
I don't wanna sound rude or ignorant or anything, but I highly doubt that lol. BeamNG runs significantly better on my system than Cyberpunk 2077 does (R7 5800X, 3070 Ti). I do agree some other anti-aliasing methods could be nice, but not necessarily DLSS or FSR, rather something like MSAA. Fewer and fewer new games support it, for reasons that are beyond me; it's one of the best ways to fight jagged edges (at a major performance cost, sure, but it works astoundingly well).
BeamNG does currently have a problem with AMD GPUs where it cannot use them at 100%, but FSR wouldn't fix that.
I will admit I slightly exaggerated to help get my point across, but my computer can smoothly run Cyberpunk at 60-70 FPS at mostly max settings (even with RTX on!), while BeamNG also runs at 60-70 FPS and often drops below that. It's not really that BeamNG struggles to run, it's that it should run better.
While I agree that it would be a welcome, if not necessary, QOL addition, we have to keep in mind that they are working behind the scenes on the new Vulkan renderer, which you can already try. That will help massively in CPU-bound scenarios, of which Beam has many when running with AI. You can also try DXVK by putting d3d11.dll and dxgi.dll from the DXVK release on GitHub into C:\Program Files (x86)\Steam\steamapps\common\BeamNG.drive\Bin64\. On my 1080 it does not give extra frames, but it might on newer GPUs.
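If it helps, here is a minimal sketch of that DLL drop-in as a script, assuming the default Steam install path from above and a locally extracted DXVK release (the dxvk-2.3\x64 folder name is just a placeholder for wherever you unpacked it):

```python
# Sketch: copy the DXVK D3D11 DLLs next to the BeamNG executable.
# Paths are assumptions; adjust them to your own Steam library and DXVK download.
import shutil
from pathlib import Path

dxvk_x64 = Path(r"C:\Downloads\dxvk-2.3\x64")  # hypothetical extract location of the DXVK release
beamng_bin = Path(r"C:\Program Files (x86)\Steam\steamapps\common\BeamNG.drive\Bin64")

for dll in ("d3d11.dll", "dxgi.dll"):
    shutil.copy2(dxvk_x64 / dll, beamng_bin / dll)  # place/overwrite the DLLs in the game's Bin64 folder
    print(f"copied {dll} -> {beamng_bin}")
```

Deleting those two DLLs from Bin64 reverts the game to the stock DirectX path.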
Definitely not a CPU-bound issue, I have a Ryzen 9 and it hardly ever gets above 35% usage. I will admit to making the title a bit more dramatic, sorta clickbaity, since this thread didn't have any activity for a few days after posting under a different name; changed it and boom, all of this activity.
Might still be a CPU limit, since one or two cores could be maxed out while the others do nothing, hence no 100% overall utilization. Physics calculations like to run on a single core, same as in Assetto Corsa. Plus, with modern CPUs the threads jump around between cores, so it's hard to observe. EDIT: To find out, hit Ctrl+Shift+F to bring up the Performance Graph. It will show potential bottlenecks in orange.
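You can also check this from outside the game. A quick sketch, assuming the third-party psutil package is installed (pip install psutil), that prints per-core load so you can see whether one core is pegged even when the overall figure reads ~35%:

```python
# Sketch: sample per-core CPU usage for ~10 seconds while BeamNG is running.
# Requires psutil (an assumption; install with `pip install psutil`).
import psutil

for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)  # one percentage per logical core
    print(" ".join(f"{p:5.1f}%" for p in per_core))
    if max(per_core) > 95:
        print("-> at least one core is nearly maxed out (likely single-thread bottleneck)")
```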
Nah, it only says GPU as a potential bottleneck; the 35% was with traffic and whatnot, with just one car it's at 10% lol
MSAA is the worst kind of AA. It literally samples the scene multiple times, which is the least efficient method of AA; that is why it got forgotten. TAA is slightly less sharp, but it's way easier on the GPU since it reuses frames already generated, and it also allows upscaling if done right. We desperately need DLSS 3.0, as it can double framerates that are bound by the CPU (like in heavy traffic) and it also helps the GPU. It also comes hand in hand with DLAA, which is an almost free anti-aliasing method.
MSAA is not the most elegant or the most performant AA method, no, but it just works. That's why I like it. DLSS3 is... No thank you. I'd rather not have those weird artifacts from the generated frames...
While DLSS 3 does do this, you don't actually notice it in game, in my experience; it's only in screenshots that "pixel peepers" spot the artifacts. So that's not really an issue at all.
I only know that Microsoft Flight Simulator runs and looks better to me without DLSS. I have never experienced any improvement, looks-wise or performance-wise, on my AMD Ryzen 5600X and GeForce 3060 Ti PC. Much more important would be an increased draw distance, so the LOD is higher at a farther distance. Sometimes an object only appears fully detailed when you are a mere 20 meters away; before that it looks horrible. This really hampers the game's looks in my opinion, and it is easier to achieve, with a more noticeable outcome, than DLSS and the like. A simple slider or drop-down menu linked to the config, like any other adjustment, would suffice, creating a better tunable experience for all users!
If there is a game where DLSS 3.0 and its frame generation could be the most helpful, it is definitely BeamNG, as this game can be heavily CPU-bound if you add some traffic. I'll take slight artifacts all day every day if it doubles my framerate, thank you.
https://www.beamng.com/threads/performance-issues-with-amd-gpus.82716/ I don't know the cause, but even if your FPS are uncapped and there is no CPU bottleneck, BeamNG won't use 100% of the GPU, more like 90% in my case.