Could we see possible PhysX support at a later date? It would take some of the strain off the CPU for the physics calculations.
Nice. That's open-source, isn't it? I guess the devs would have to pay some kind of licence to implement PhysX...
Right. But surely it has some limitations compared to PhysX? I mean, you can only get so much for free, right?
PhysX itself is a completely different physics engine, and it would be a downgrade compared to this one.
The thing is, even if PhysX were better, you'd be screwing over people who use AMD products. OpenCL means that both can get the experience.
For sure, but if it were technically possible to use the Torque engine for the crash stuff and PhysX for particles like dirt or broken windows, it would be a beast combination, since PhysX is much more FPS-friendly when it comes to the rough physics of small parts. Let's say breaking rocks when you crash into a wall, a falling tree, etc... But as far as I know you can't have two engines, so my point is kind of stupid.
PhysX is great, but I don't think it is something for this game. What they've already made is probably much better at what they want to do. And OpenCL is firstly open source (thumbs up), and it will run on both AMD (ATI) and nVIDIA GPUs. Expect OpenCL to grow. OpenCL can actually be run on CPUs too, and I would prefer that; BeamNG is barely touching my processor with a stick. It would be cool if we could adjust where the load is put. They would have to make two sets of code though, I think, at least to make it efficient. Would be interesting to get some thoughts on this from the developers.
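For what it's worth, the "runs on GPUs and CPUs" part of OpenCL really is just a device flag in the host code. Here's a minimal sketch using the standard Khronos C API (not anything from BeamNG, just an illustration) showing how a developer could let you choose whether the load goes to the GPU or the CPU:

```cpp
// Minimal OpenCL device-selection sketch (plain Khronos C API, usable from C++).
// Not BeamNG code -- it only shows that targeting the CPU vs. the GPU is a one-flag change.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    // Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU to push the load onto the processor.
    cl_device_type wanted = CL_DEVICE_TYPE_GPU;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id device;
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], wanted, 1, &device, &num_devices) != CL_SUCCESS)
            continue;  // this platform has no device of the requested type

        char name[256] = {0};
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("Would run kernels on: %s\n", name);

        // From here on the pipeline (context, queue, kernels) is identical,
        // which is why the same OpenCL code can run on AMD, nVIDIA or the CPU.
        cl_int err = CL_SUCCESS;
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        if (err == CL_SUCCESS) clReleaseContext(ctx);
        return 0;
    }
    printf("No device of the requested type found.\n");
    return 1;
}
```

Whether the CPU path would actually be faster than hand-written multithreaded CPU code is a separate question, but at least the "two sets of code" problem mostly comes down to picking a different device.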
I have an overclocked AMD FX-8150 CPU, and when I spawn the Bruckell Moonhawk it hammers the CPU and drops it to about 20 frames. I am planning on getting a better one by Intel, like going all out and spending everything on a new CPU and motherboard so I don't have to worry about that kind of stuff for two weeks.
Actually, I think that sounds very logical. The reason is, firstly, that Torque3D uses only one core, and I am not sure if BeamNG has additional code that uses another core. So my conclusion is that Drive uses 1-2 CPU cores.
EDIT: While Torque3D probably utilizes 1 core, their physics engine utilizes many more (at least 8 threads in total).
EDIT 2: Apparently Torque3D uses multiple cores for some functions (tdev said so). Now, this game will not use all 8 threads of your FX-8150; it'll use 1-2 cores as described above. And all AMD CPUs have very weak cores: 1 Intel core is as powerful as at least 2 AMD cores. That means that when your CPU runs at 3.6 GHz it will be like a 1.8 GHz Intel, maybe even less compared to the Haswell architecture.
EDIT again: It should use all 8 cores. But those cores aren't very powerful, so my point still stands. The advantage AMD has is that it has 8 cores, but they are weak; a 4-core Intel CPU would be more powerful. But please look out for the future, I think that AMD will get back into the game with other architectures. I hope so anyway, because Intel is killing the PC market. When it has no competition, it can do what it wants.
I have seen somewhere that BeamNG.drive is multicore... so if Torque3D only uses one, the rest of the game must be on another core...
I don't know this for sure; all I know is that Torque3D uses one core. But anyway, the same point stands: you need more powerful cores. And when the Torque3D engine only uses one core, that could be the root of the problem. Thank you for pointing that out, though. But do you have a source? In that case it is probably their physics engine.
Torque3D itself is only single-threaded but does allow end-user multithreading (Torque3D is mostly C#, which can use .NET System.Threading if you wish). I think BeamNG's physics engine itself, being a 3rd-party extension to Torque3D, is indeed multithreaded. The early BeamNG demos were in CryEngine, which does support Mono for some plugins; it really wouldn't surprise me if BeamNG itself is a C# library that is just run under Mono and referenced from either CryEngine or Torque3D.
PhysX-wise: PhysX itself is just a physics engine, nothing more, but it can use CUDA where present for hardware acceleration. In theory BeamNG could use CUDA, and I think that is what you meant. CUDA vs OpenCL is debated heavily. They both do the same thing, but CUDA only works on NVidia hardware, whereas OpenCL works on just about any device that has a processor and can communicate bidirectionally with the host CPU: mainly graphics cards, but also some sound cards, dedicated maths processors or physics processors (actually, physics processors are the origins of PhysX and CUDA; NVidia bought out a company that made physics processors and just rolled CUDA into their GPU products instead). In some benchmarks OpenCL outperforms CUDA when run on NVidia hardware, but that is not a fair comparison, as NVidia implements some parts (not all) of OpenCL on top of CUDA (NVidia cards don't support certain OpenCL instructions, so they almost run an emulator for those parts in CUDA). CUDA vs OpenCL on other platforms can't be tested effectively. Some whitepaper researchers have said CUDA should be the higher-performance technology, being hardware built specifically for the task, but it's just so hard to find a way to test that accurately.
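To make the "single-threaded engine, multithreaded physics extension" idea a bit more concrete, here's a rough sketch in plain C++ with std::thread. It's not the actual Torque3D/BeamNG code and not the C#/System.Threading route mentioned above; the names (Node, integrate_node, physics_step) are made up for illustration. The main loop stays single-threaded, while the physics step fans its per-node work out across all available cores:

```cpp
// Illustration only: a single-threaded game loop driving a multithreaded physics step.
// Hypothetical names (Node, integrate_node) -- not real BeamNG/Torque3D APIs.
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

struct Node { float pos = 0, vel = 0, force = 0, mass = 1.0f; };

// Hypothetical per-node integration, the kind of work a soft-body solver repeats thousands of times per step.
static void integrate_node(Node& n, float dt) {
    n.vel += (n.force / n.mass) * dt;
    n.pos += n.vel * dt;
}

// Split the node array into one contiguous chunk per hardware thread.
static void physics_step(std::vector<Node>& nodes, float dt) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = (nodes.size() + workers - 1) / workers;
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end   = std::min(nodes.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&nodes, begin, end, dt] {
            for (size_t i = begin; i < end; ++i) integrate_node(nodes[i], dt);
        });
    }
    for (auto& t : pool) t.join();  // the main loop waits here, then carries on single-threaded
}

int main() {
    std::vector<Node> nodes(100000);
    for (int frame = 0; frame < 3; ++frame) {
        physics_step(nodes, 1.0f / 60.0f);  // uses every core the CPU has
        std::printf("frame %d simulated on %u threads\n",
                    frame, std::thread::hardware_concurrency());
        // single-threaded engine work (rendering, scripts, input) would go here
    }
}
```

A real engine would keep a persistent thread pool rather than spawning threads every frame, but the point stands: the engine's own loop can be single-threaded while the physics extension still loads up all 8 cores of an FX-8150.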
Actually, I found a post where Gabester said it. EDIT: And tdev. Thanks for the information, interesting to read.