https://www.tomsguide.com/gaming/playstation/sonys-mark-cern...
Next is UDNA 1, an architecture converged with its older sibling, CDNA (formerly GCN).
Like, the article actually states this, but runs an RDNA 5 headline anyway.
What's to stop Sony being like, we don't want UDNA 1, we want an iteration of RDNA 4?
For all we know, it IS RDNA 5... it just won't be available to the public.
"Big chunks of RDNA 5, or whatever AMD ends up calling it, are coming out of engineering I am doing on the project"
CDNA was for HPC / supercomputers and data centers. GCN was always a better architecture than RDNA for that.
RDNA itself was trying to be more Nvidia-like: fewer FLOPs but better latency.
Someone is getting the axe. Only one of these architectures will win out in the long run, and the teams will also converge, allowing AMD to consolidate engineers on improving the same architecture.
We don't know yet what the consolidated team will release. But it's a big organizational shift that will surely affect AMD's architectural decisions.
CDNA wavefronts are 64 lanes wide. And on CDNA 1, I believe, each wavefront even executed as 16 lanes repeated over 4 clock ticks (i.e. the minimum latency of every operation, even add or xor, was 4 clock ticks). It looks like CDNA 3 might not do that anymore, but that's still a lot of differences...
RDNA actually executes 32 lanes at a time, issuing per clock tick. It's a grossly different architecture.
That doesn't even get into Infinity Cache, 64-bit support, AI instructions, ray tracing, or any of the other differences...
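Not from the thread, but here's roughly how that difference leaks into code: a minimal HIP host-side sketch (device 0 and the printout are just illustrative) that queries the wavefront width instead of hard-coding 32 or 64 lanes.

```cpp
// Minimal HIP sketch: ask the runtime for the wavefront width instead of assuming it.
// RDNA-class GPUs report 32; GCN/CDNA-class GPUs report 64.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    hipDeviceProp_t prop;
    if (hipGetDeviceProperties(&prop, 0) != hipSuccess) {
        std::fprintf(stderr, "no HIP device found\n");
        return 1;
    }
    // Any cross-lane code (reductions, ballots, shuffles) that hard-codes one width
    // silently breaks when moved between the two families.
    std::printf("wavefront width: %d lanes\n", prop.warpSize);
    return 0;
}
```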
It seems that we are at the stage where incremental improvements in graphics require exponentially more computing capability.
Or the game engines have become super bloated.
Edit: I stand corrected; in previous cycles we had orders-of-magnitude improvements in FLOPS.
Don't forget that one reason studios have tended to favour consoles has been standardized hardware, and that is no longer the case.
When middleware becomes the default option, it is relatively hard to have game features that are hardware-specific.
This doesn't affect me too much since my backlog is long and by the time I play games, they're old enough that current hardware trivializes them, but it's disappointing nonetheless. It almost makes me wish for a good decade or so of performance stagnation to curb this behavior. Graphical fidelity is well past the point of diminishing returns anyway.
Compare PS1 with PS3 (just over 10 years apart).
PS1: ~0.03 GFLOPS (approximate, since it didn't really do FLOPS per se). PS3: ~230 GFLOPS.
That's several thousand times faster (230 / 0.03 is roughly 7,700x).
Now compare PS4 with PS5 pro (also just over 10 years apart):
PS4: ~2 TFLOPS. PS5 Pro: ~33.5 TFLOPS.
About 17x faster. So the speed of improvement has fallen dramatically.
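Just to sanity-check those ratios with the figures above (all approximate, nothing official), a throwaway snippet:

```cpp
// Back-of-the-envelope check of the generational jumps quoted above,
// using the rough GFLOPS figures from the comment.
#include <cstdio>

int main() {
    const double ps1_gflops    = 0.03;     // approximate; the PS1 didn't really do FLOPS per se
    const double ps3_gflops    = 230.0;
    const double ps4_gflops    = 2000.0;   // ~2 TFLOPS
    const double ps5pro_gflops = 33500.0;  // ~33.5 TFLOPS

    std::printf("PS1 -> PS3:     ~%.0fx\n", ps3_gflops / ps1_gflops);     // ~7667x
    std::printf("PS4 -> PS5 Pro: ~%.1fx\n", ps5pro_gflops / ps4_gflops);  // ~16.8x
    return 0;
}
```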
Arguably you could say the real drop in optimization happened in that PS1 -> PS3 era: everything went from hand-optimized assembly code to (generally) higher-level languages and abstracted graphics frameworks like DirectX and OpenGL. Just no one noticed because we had 1000x the compute to make up for it :)
Consoles/games got hit hard first by crypto and now by AI needing GPUs. I suspect if it weren't for that we'd have vastly cheaper and vastly faster gaming GPUs, but when you're making boatloads of cash off crypto miners and then AI, the rate of progress fell dramatically for gaming at least (most of the innovation, I suspect, went into high VRAM, memory controllers, and datacentre-scale interconnects).
Take all this path tracing / ray tracing stuff: yes, it is very cool and can add to a scene, but most people can barely tell it is there unless you show it side by side. And it takes a lot of compute to do.
We are polishing an already very polished rock.
I agree that 10x doesn't move much, but that's sort of my point - what could be done with 1000x?
One potential forcing factor may be the rise of iGPUs, which have become powerful enough to play many titles well while remaining dramatically more affordable than their discrete counterparts (and sometimes not carrying crippling VRAM limits to boot), as well as the growing sector of PC handhelds like the Steam Deck. It’s not difficult to imagine that iGPUs will come to dominate the PC gaming sphere, and if that happens it’ll be financial suicide to not make sure your game plays reasonably well on such hardware.
And given most of these assets are human-made (well, until very recently), this requires more and more artists. So I wonder if games studios are now more like art studios with a bit of programming bolted on; before, with lower-res graphics, you maybe had one artist for every 10 programmers, and now it's flipped the other way. I feel that at some point over the past ~decade we hit an "organisational" wall with this, and very, very few studios can successfully manage teams of hundreds (thousands?) of artists effectively.
Many AAA engines' number one focus isn't "performance at all costs", it's "how do we most efficiently let artists build their vision". And efficiency isn't runtime performance; efficiency is how much time it takes for an artist to create something. Performance is only a goal insofar as it frees artists from being limited by it.
> So I wonder if games studios are more just art studios with a bit of programming bolted on.
Not quite, but the ratio is very much in favor of artists compared to 'the old days'. Programming is still a huge part of what we do. It's still a deeply technical field, but often "programming workflows" are lower priority than "artist workflows" in AAA engines because art time is more expensive than programmer time, given the huge number of artists working on any one project compared to programmers.
Just go look at the credits for any recent AAA game. Look at how many artist positions there are compared to programmer positions and it becomes pretty clear.
If "realistic" graphics are the objective, though, then yes, better displays pose serious problems. Personally I think it's probably better to avoid art styles that age like milk, or to go for a pseudo-realistic direction that is reasonably true to life while mixing in just enough stylization to scale well and not look dated at record speeds. Japanese studios seem pretty good at this.
It's no wonder nothing comes out in a playable state.
Even Naughty Dog went with their own Lisp engine for optimization rather than hand-written ASM.
My understanding is that the mental model of programming in the PS2 era was originally still very assembly-like outside of a few places (like Naughty Dog), and that GTA3 on PS2 made possibly its biggest impact by showing that's not necessary.
I am a bit uncomfortable with the performance/quality framing that people have set up, but I personally feel that the quality floor for perf is way higher than it used to be. Though there seem to be fewer people parking themselves at "60fps locked", which felt like a thing for a while.
Cyberpunk is a good example of a game that straddled the in-between; many of its performance problems on the PS4 were due to constrained serialization speed.
Nanite and games like FF16 and Death Stranding 2 do a good job of drawing complex geometry and textures that wouldn't have been possible on the previous generation.
It’s also completely optional in Unreal 5. You use it if it’s better. Many published UE5 games don’t use it.
A small RAM space with a hard CPU/GPU split (so no reallocation), feeding off a slow HDD which is in turn being fed by an even slower Blu-ray disc: you are sitting around for a while.
The only thing I value is a consistent stream of frames on a console.
From PS5 Pro reveal https://youtu.be/X24BzyzQQ-8?t=172
Excessively high detail models require extra artist time too.
The path nowadays is to use all kinds of upscaling and temporal detail junk that is actively recreating late 90s LCD blur. Cool. :(
https://www.gamespot.com/gallery/console-gpu-power-compared-...
There have been a few decent sized games, but nothing at grand scale I can think of, until GTA6 next year.
"Bloated" might be the wrong word to describe it, but there's some reason to believe that the dominance of Unreal is holding performance back. I've seen several discussions about Unreal's default rendering pipeline being optimized for dynamic realtime photorealistic-ish lighting with complex moving scenes, since that's much of what Epic needs for Fortnite. But most games are not that and don't make remotely effective use of the compute available to them because Unreal hasn't been designed around those goals.
TAA (temporal anti-aliasing) is an example of the kind of postprocessing effect that gamedevs are relying on to recover performance lost in unoptimized rendering pipelines, at the cost of introducing ghosting and loss of visual fidelity.
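For anyone unfamiliar, the heart of a TAA resolve is tiny. A conceptual CPU-side sketch (not real engine code; the blend factor is just a typical-looking value) of where the ghosting trade-off comes from:

```cpp
// Conceptual TAA resolve for one pixel: blend the current frame with a reprojected
// history sample. Stable edges, at the cost of some sharpness and potential ghosting.
struct Color { float r, g, b; };

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// 'history' is last frame's output sampled at the reprojected position (current pixel
// minus its motion vector). Real implementations also clamp/clip the history sample
// against the current frame's neighbourhood to limit ghosting.
Color taa_resolve(Color current, Color history, float blend = 0.1f) {
    // Exponential moving average: mostly history, a little of the new frame.
    return lerp(history, current, blend);
}
```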
Fully dynamic interactive environments are liberating. Pursuing them is the right thing to do.
> Fully dynamic interactive environments are liberating. Pursuing them is the right thing to do.
Great video from Digital Foundry that goes into that (for Doom: The Dark Ages).

I mean, look at Uncharted, Tomb Raider, Spider-Man, God of War, TLOU, HZD, Ghost of Tsushima, Control, Assassin's Creed, Jedi Fallen Order / Survivor. Many of those games were not made in Unreal, but they're all stylistically well suited to what Unreal is doing.
Your other options for AA are
* Supersampling. Rendering the game at a higher resolution than the display and downscaling it. This is incredibly expensive.
* MSAA. This samples surfaces more than once per pixel, smoothing over jaggies. This worked really well back before we started covering every surface with pixel shaders. Nowadays it just makes pushing triangles more expensive with very little visual benefit, because the pixel shaders still run at 1x scale and thus are still aliased.
* Post-process AA (FXAA, SMAA, etc). These are post-process shaders applied to the whole screen after the scene has been fully rendered. They often just use a cheap edge-detection algorithm and try to blur the edges it finds (see the toy sketch below). I've never seen one that was actually effective at producing a clean image, as they rarely catch all the edges and do almost nothing to alleviate shimmering.
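The toy sketch, purely illustrative (threshold and weights made up; real FXAA/SMAA are considerably smarter):

```cpp
// Toy edge-detect-and-blur post-process AA for a single pixel, in the spirit of the
// cheap filters described above. Not FXAA or SMAA, just the general idea.
#include <algorithm>

struct Color { float r, g, b; };

float luma(Color c) { return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; }

Color mix(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
}

// If luminance contrast against the 4 neighbours exceeds a threshold, blend toward the
// neighbourhood average. Cheap, but it misses sub-pixel detail and does nothing about
// frame-to-frame shimmering, which matches the criticism above.
Color postprocess_aa(Color centre, Color up, Color down, Color left, Color right) {
    float l    = luma(centre);
    float lmin = std::min({luma(up), luma(down), luma(left), luma(right), l});
    float lmax = std::max({luma(up), luma(down), luma(left), luma(right), l});
    float contrast = lmax - lmin;
    if (contrast < 0.1f) return centre;  // not an edge: leave the pixel untouched
    Color avg = { (up.r + down.r + left.r + right.r) * 0.25f,
                  (up.g + down.g + left.g + right.g) * 0.25f,
                  (up.b + down.b + left.b + right.b) * 0.25f };
    return mix(centre, avg, std::min(contrast, 0.75f));  // blur harder on stronger edges
}
```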
I've seen a lot of "tech" YouTubers try to claim TAA is a product of lazy developers, but not one of them has been able to demonstrate a viable alternative antialiasing solution that solves the same problem set with the same or better performance. Meanwhile TAA and its various derivatives like DLAA have only gotten better in the last 5 years, alleviating many of the problems TAA became notorious for in the latter '10s.
It's more similar to supersampling, but without the higher pixel shader cost (the pixel shader still only runs once per "display pixel", not once per "sample" like in supersampling).
A pixel shader's output is written to multiple (typically 2, 4 or 8) samples, with a coverage mask deciding which samples are written (this coverage mask is all 1s inside a triangle and a combo of 1s and 0s along triangle edges). After rendering to the MSAA render target is complete, an MSAA resolve operation is performed which merges samples into pixels (and this gives you the smoothed triangle edges).
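A rough CPU-side sketch of what that resolve amounts to per pixel (real hardware does this in a fixed-function or shader resolve pass; the math is just an average):

```cpp
// Box resolve of one MSAA pixel: average its N samples into a single display pixel.
// Samples not covered by a triangle still hold whatever was rendered behind it, which
// is what produces the smoothed edge. Assumes sample_count >= 1.
#include <cstddef>

struct Color { float r, g, b; };

Color resolve_pixel(const Color* samples, std::size_t sample_count) {
    Color out{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < sample_count; ++i) {
        out.r += samples[i].r;
        out.g += samples[i].g;
        out.b += samples[i].b;
    }
    out.r /= sample_count;
    out.g /= sample_count;
    out.b /= sample_count;
    return out;
}
```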
The games industry has spent the last decade adopting techniques that misleadingly inflate the simple, easily-quantified metrics of FPS and resolution, by sacrificing quality in ways that are harder to quantify. Until you have good metrics for quantifying the motion artifacts and blurring introduced by post-processing AA, upscaling, and temporal AA or frame generation, it's dishonest to claim that those techniques solve the same problem with better performance. They're giving you a worse image, and pointing to the FPS numbers as evidence that they're adequate is focusing on entirely the wrong side of the problem.
That's not to say those techniques aren't sometimes the best available tradeoff, but it's wrong to straight-up ignore the downsides because they're hard to measure.
In the past, MSAA worked reasonably well, but it was relatively expensive, it didn't apply to all forms of high-frequency aliasing, and it doesn't work with the modern rendering paradigm anyway.
I feel like finally they are turning the corner on software and drivers.
If you’re making a PS game you’re already doing tons of bespoke PS stuff. If you don’t want to deal with it there are plenty of pieces of middleware out there to help.
Honestly these “where’s Vulkan” posts on every bit of 3D capable hardware feel like a stupid meme at this point as opposed to a rational question.
Maybe they should just ship DX12. That’s multi-platform too.
Honestly, any idea that defends NIH like this belongs with the dinosaurs. NIH is a stupid meme, not the opposite of it.
That is how good Khronos "design by committee" APIs end up being.
However please don't undersell what they got right. Because what they got right, they got _very_ right.
Barriers. Vulkan got barriers so absolutely right that every competing API has now adopted a clone of Vulkan's barrier API. The API is cumbersome but brutally unambiguous compared to DirectX's resource states or Metal's hazard tracking. DirectX has a rat's nest of special cases in the resource-state API because of how inexpressive it is, and just straight up forgot to consider COPY->COPY barriers.
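For anyone who hasn't written one, this is the kind of explicitness being praised: a plain core Vulkan 1.0 image barrier for the classic "copied into, now sample it" transition (the command buffer and image are assumed to come from elsewhere):

```cpp
#include <vulkan/vulkan.h>

// Transition an image that was just written by a copy so a fragment shader can sample it.
// Every stage, access mask, and layout is spelled out; nothing is inferred by the driver.
void copy_to_sampled_barrier(VkCommandBuffer cmd, VkImage image) {
    VkImageMemoryBarrier barrier{};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;   // make the copy's writes...
    barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;      // ...visible to shader reads
    barrier.oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
    barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image;
    barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,         // wait for the copy stage
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,  // before fragment shading
                         0,                                      // no dependency flags
                         0, nullptr,                             // no global memory barriers
                         0, nullptr,                             // no buffer barriers
                         1, &barrier);                           // one image barrier
}
```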
We also have SPIR-V. Again, D3D12 plans to dump DXIL and adopt SPIR-V.
The fundamentals Vulkan got very right IMO. It's a shame it gets shackled to the extension mess.
Microsoft adopting SPIR-V as the DXIL replacement is most likely a matter of convenience: the format got messy to maintain, it's tied to an old fork of LLVM, and HLSL has gotten the industry's weight behind it, even being favoured over GLSL for Vulkan (which Khronos acknowledged at Vulkanised 2024 they aren't doing any work on at all, zero, nada). So why redo DXIL from scratch when they could capitalize on existing work to target SPIR-V from HLSL?
DXIL is also a useless intermediate representation because parsing it is so bloody difficult that nobody could actually do anything with it. DXIL was, and still is, functionally opaque bytes: you can't introspect it, you can't modify it, you can't do anything of use with it. SPIR-V is dead simple to parse and has an array of tools built around it because it's so easy to work with.
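To make the "dead simple to parse" point concrete: a SPIR-V module is just a stream of 32-bit words, and a minimal dumper is a few lines (sketch only, no validation beyond the magic number):

```cpp
// Walk a SPIR-V module: a 5-word header (magic, version, generator, id bound, schema)
// followed by instructions whose first word packs the word count (high 16 bits) and
// the opcode (low 16 bits).
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

void dump_spirv(const std::vector<uint32_t>& words) {
    if (words.size() < 5 || words[0] != 0x07230203u) {  // SPIR-V magic number
        std::puts("not a SPIR-V module");
        return;
    }
    std::printf("version word: 0x%08x, id bound: %u\n", words[1], words[3]);
    for (std::size_t i = 5; i < words.size(); ) {
        uint32_t opcode     = words[i] & 0xFFFFu;
        uint32_t word_count = words[i] >> 16;
        std::printf("  opcode %u (%u words)\n", opcode, word_count);
        if (word_count == 0) break;  // malformed module, bail out
        i += word_count;
    }
}
```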
I don't really see how the OpenCL history is relevant to Vulkan either. Khronos losing the OpenCL game to Nvidia (certainly no thanks to Nvidia's sabotage) doesn't change the fact that SPIR-V is a much more successful format.
[0] https://themaister.net/blog/2022/04/24/my-personal-hell-of-t...