I want to make a camera system where you can see different parts of your car in a UI app. For example, when I'm worried I'm about to hit a rock while offroading, I could check how close I am in the UI without switching out of the camera I'm already in, like the driver camera.
Yeah, it would open so many doors for modders if the game allowed multiple cameras in the scene at once. The engine can already generate cube map reflections (dynamic reflections), which already render the scene from multiple viewpoints every frame, so allowing multiple user-placed cameras isn't a technical challenge.
If it allowed that, the game would have to re-render the whole map for each camera separately; I imagine it would not only be incredibly laggy but also extremely buggy.
Dynamic reflections, aka cube map reflections, work by rendering the scene six times per frame, from view directions 90 degrees apart. The game already gives you options to scale down the detail of the dynamic reflections to improve performance, so it's definitely feasible.
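To make the "six renders, 90 degrees apart" point concrete, here's a minimal sketch (plain Python, not engine code) of the six view directions a cubemap render uses, one per cube face:

```python
# The six view directions of a cubemap render, one per cube face.
# Every pair of distinct faces is either opposite (dot product -1)
# or perpendicular (dot product 0), i.e. 90 degrees apart.

CUBE_FACE_DIRECTIONS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

The engine renders the scene once along each of these directions every frame (or less often, depending on the reflection-update setting) and stitches the six results into the cubemap.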
Oh, I didn't know it worked like that. Maybe someone could make a camera system using dynamic reflections then?
Well, you could, but it would look pretty terrible and be inaccurate. The cube map reflections are generated at the center of the vehicle (I believe), so a back-up camera built on them wouldn't actually show the view from the back of the vehicle; it would show what you'd see looking rearward from the vehicle's center. --- Post updated --- It looks like BeamNG.tech (the version of BeamNG.drive for academic/industry purposes) is able to place multiple cameras in the scene at once (idk why they don't include this in the base game): https://documentation.beamng.com/beamng_tech/sensors/camera/
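A toy 2-D example (made-up distances, not BeamNG data) of the parallax error described above: the same rock sits at a noticeably different angle when viewed from the vehicle's center than from the rear bumper, which is why a center-sampled cubemap is wrong for a back-up camera.

```python
import math

# Hypothetical positions in metres: vehicle centre, rear bumper, and a
# rock behind and slightly to the side of the car. 2-D top-down view.
vehicle_center = (0.0, 0.0)
rear_bumper    = (0.0, -2.5)   # 2.5 m behind the centre
obstacle       = (1.0, -4.0)   # rock behind and to the side

def bearing(cam, target):
    """Angle of the target off the straight-rearward axis, in degrees."""
    dx, dy = target[0] - cam[0], target[1] - cam[1]
    return math.degrees(math.atan2(dx, -dy))

center_view = bearing(vehicle_center, obstacle)  # ~14 degrees off-axis
bumper_view = bearing(rear_bumper, obstacle)     # ~34 degrees off-axis
```

The closer the obstacle, the larger this discrepancy gets, which is exactly when a back-up camera matters most.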
Not necessarily. Render-to-texture (RTT) reflections have been used in games for a long time. If you "optimise" enough (limit the number of separate RTT reflections in one scene, tune how much detail is drawn for the texture, and adjust the texture's resolution), it's a powerful option for accurate reflections (only on flat surfaces) even today. Look at Hitman 3: it uses render-to-texture for many glass and mirror surfaces (in combination with SSR).

Today we have real-time ray tracing, which has many advantages over RTT. Mainly, it is more performant in scenarios with a lot of reflections in one scene (depending a bit on the hardware used), and the reflections are more accurate on non-flat surfaces and can self-reflect (among other interesting light physics). For fully ray-traced renders you don't even need different tech for different effects (ambient occlusion, reflections, global illumination, shadows...) since all of it is simulated by the ray tracer.

What I was trying to say before I got derailed into ray tracing and its benefits is that having multiple render-to-texture "cameras" in one scene isn't really an issue as long as you are intelligent about it. For example, angelo234's great driver assistance mod has a reversing camera; if you wanted to show that camera on the screen of, say, the ETK 800, you wouldn't need a super high resolution (maybe 512p is enough) and you wouldn't need to render far-away scenery, since it looks down at the street. The FPS impact would be rather minimal, I'd think...
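Some rough fill-rate arithmetic behind the "512p is enough" point (made-up resolutions; this only counts pixels shaded and ignores geometry and draw-call overhead):

```python
# Pixels per frame for the main view vs. an added 512x512 RTT camera.
# Illustrative numbers only, not measured BeamNG costs.
main_pixels = 1920 * 1080   # a 1080p main render
rtt_pixels  = 512 * 512     # a low-res reversing-camera texture

overhead = rtt_pixels / main_pixels
# ~0.13, i.e. roughly 13% more pixels shaded per frame; a shorter
# draw distance for the RTT camera cuts its cost further still.
```

That back-of-the-envelope number is why a single low-res auxiliary camera is cheap relative to the main render, even before culling distant scenery.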
I made a similar argument over a year ago or so, back when rear-view mirrors were discussed; they would use the same technique/system. I think the problem lies in the fact that the current game code is far too nested to take this side track at this point. In my opinion the devs had no intention of implementing this in the early stages of development, and as a result all the code is now focused on one single viewport. And maybe the physics simply don't allow a combination with multiple viewports (considering the number of low-end PC users) on the current engine (C++, T3D)? Still not sure; are there any T3D applications out there that prove otherwise? One (like me) might want to look into this again. --- Post updated --- Update: what you see above is from a GitHub page, with a latest commit from 2013, containing some community requests. I've also quickly glanced at some release notes from the past years and it seems like this still isn't happening as of now. Will it ever? Idk.
I think it'll happen eventually. At the end of the day, even if it doesn't, you can just position a mesh with a mirror texture and turn the reflections on.
Multiple viewports wouldn't be used for implementing this, unless you wanted to make something like a split-screen view (which doesn't necessarily need multiple viewports either). How it would work is as follows:

1. Render the scene from the auxiliary cameras and store their images in frame buffer objects in VRAM
2. Apply those images onto the target surfaces (e.g. mirror faces), basically the same way DDS images are pasted onto objects' surfaces
3. Render the scene from the main/player's camera
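The pass ordering in those three steps can be sketched like this (plain Python stand-in; names like `render_to_fbo` are hypothetical, not a real engine API):

```python
# Build the ordered list of render passes for one frame: all off-screen
# auxiliary-camera passes first, then the single on-screen main pass
# that samples their textures.

def build_frame(aux_cameras, main_camera):
    passes = []
    for cam in aux_cameras:                     # step 1: scene -> FBO texture
        passes.append(("render_to_fbo", cam))
    passes.append(("bind_fbo_textures", None))  # step 2: textures onto surfaces
    passes.append(("render_to_screen", main_camera))  # step 3: main pass
    return passes

frame = build_frame(["left_mirror_cam", "right_mirror_cam"], "player_cam")
```

The ordering matters: the auxiliary renders must finish before the main pass samples their textures, or the mirrors would show last frame's image.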
As far as my understanding goes, an auxiliary camera = a viewport? Once you have more than one camera view on your screen, the game/program is actually rendering the world at least twice, so no matter what you call it, imo, if the game engine hasn't got the facilities it just can't be done; otherwise it would already have been done, either by the devs or by modders. Correct me if I'm wrong. It does make me wonder, though, how all these other games manage to do this so easily. Even those '80s car games rendered rear-view mirrors and split screens (and also replay ghosts). I'm not a real wizard on game and graphics engines. Might it have to do with the fact that licensed game or graphics engines are expensive and just more capable (even at the low resource level)?
A viewport is a defined region (x, y, width, height) of the application window used for drawing the scene or anything else. A camera is just an abstraction for a position and orientation in the scene/map from which to render. For games that use only one camera, the scene is rendered from the camera's location and then displayed in the viewport. For games that use multiple cameras, the scene is rendered x times, where x is the number of cameras, although exactly how the rendering is done depends on the game.

For example, a game that displays four cameras' images in a split screen could either render them to four viewports, or render them all to one viewport to mimic a split-screen display. With one viewport, it would render the four cameras' images to four frame buffer objects (storage for a camera's render/image), place four quad meshes in the quadrants of the viewport, set their textures from the images stored in the frame buffer objects, and then perform a render pass directly to the single viewport.

A game with a "picture in picture" system, like Euro Truck Simulator rendering its mirrors, only needs one viewport. First, I believe the mirror textures are rendered from auxiliary cameras (not proper terminology, but I call them that since they don't draw directly to the screen) whose position and orientation are set from each mirror face's position and normal vector, with the results stored in frame buffer objects. Then those textures from the frame buffer objects are applied onto the mirror face objects, and finally the player's camera renders the scene to the single viewport. That amounts to three cameras in total (player cam, two mirror cams).

And so in BeamNG's case, it actually is using multiple cameras already.
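The four-quadrant split-screen layout described above comes down to simple viewport arithmetic; a minimal sketch (plain Python, not engine code) producing the (x, y, width, height) rectangles inside a single window:

```python
# Split one window into four equal viewport rectangles (x, y, w, h),
# with the origin at the bottom-left as in OpenGL-style conventions.

def quadrant_viewports(window_w, window_h):
    half_w, half_h = window_w // 2, window_h // 2
    return {
        "top_left":     (0,      half_h, half_w, half_h),
        "top_right":    (half_w, half_h, half_w, half_h),
        "bottom_left":  (0,      0,      half_w, half_h),
        "bottom_right": (half_w, 0,      half_w, half_h),
    }

vps = quadrant_viewports(1920, 1080)  # four 960x540 regions
```

Whether these four regions are four real viewports or four textured quads drawn into one viewport is exactly the implementation choice the post describes.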
And they are being used for the "dynamic reflections" (cubemap reflections), which use six cameras besides the player's camera, so seven in total. The diagram below shows how it works, though in BeamNG's case it isn't a skybox being sampled but the environment surrounding the vehicle (Object), sampled using the main camera's direction vector. The sides of the cube are generated by rendering the scene from the center of the vehicle's position at six different angles, one pointing toward each side of the cube, every single frame (well, actually depending on user settings).

The problem with this is that the reflections are an approximation and really shouldn't be used for mirrors, because they don't show the true perspective of the mirror.

Hopefully this explanation wasn't too long and convoluted; I just wanted to clear some things up. TLDR: BeamNG already uses seven cameras in total with dynamic reflections, so turning off dynamic reflections and rendering the scene with a few more cameras is no problem at all.
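For the "sampled using the direction vector" part: a cubemap lookup picks which of the six faces to read by taking the dominant axis of the direction vector. A toy version of that standard rule (not BeamNG code):

```python
# Select the cubemap face to sample for a given direction vector:
# the face along the axis with the largest absolute component.

def cube_face(direction):
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

The remaining two components are then used as texture coordinates within that face, which is why the lookup only ever approximates the scene as seen from the cubemap's single capture point.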
No, not at all, I've read your entire message with great interest. So do you believe those multiple cameras used for dynamic reflections could be used for 'true' rendering of different angles (even if via frame buffer objects)? And would one have to give up dynamic reflections, or could the extra viewpoints be additional? I would love to see some dev input on this subject too, by the way.