AMD integrated M.2 slots onto a new GPU design for greatly expanded local storage and improved performance. This lets an application bypass the round trip through system memory and the PCIe bus entirely for certain rendering and preview tasks.
They demoed a traditional high-end video editing rig previewing and scrubbing through 8K raw footage at 19 fps; the SSG card accomplished the same task at 90 fps. Applications must be coded for this capability, but it appears many professional software companies are lining up to buy dev kits and integrate the tech. Retail products will launch in 2017.
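For a sense of scale, here's a quick back-of-envelope in Python for the sustained read bandwidth those frame rates imply. The 12-bit raw frame depth is my own assumption for illustration, not a figure from the demo:

```python
# Rough bandwidth estimate for scrubbing 8K raw footage.
# Frame depth below is an assumed value, not from AMD's demo.

width, height = 7680, 4320           # 8K UHD resolution
bits_per_pixel = 12                  # assumed raw Bayer bit depth
frame_bytes = width * height * bits_per_pixel / 8   # ~49.8 MB/frame

for fps in (19, 90):
    gb_per_s = frame_bytes * fps / 1e9
    print(f"{fps:>2} fps -> {gb_per_s:.1f} GB/s sustained read")
```

Under those assumptions, 19 fps works out to roughly 0.9 GB/s, which the usual system-storage path can manage, while 90 fps needs around 4.5 GB/s - much easier to sustain from NVMe flash sitting right on the card.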
As part of this evening’s AMD Capsaicin event (more on that later), AMD’s Chief Architect and SVP of the Radeon Technologies Group has announced a new Radeon Pro card unlike anything else. Dubbed the Radeon Pro Solid State Graphics (SSG), this card includes M.2 slots for adding NAND SSDs, with the goal of vastly increasing the amount of local storage available to the video card.
http://www.anandtech.com/show/10518/amd-announces-radeon-pro-ssg-polaris-with-m2-ssds-onboard
https://amp.twimg.com/v/b080f46d-e408-4348-864e-7cc12e40d1ac
I want to greatly upset you.
Modern high-density Samsung 3D NAND TLC (triple-level-cell) chips are rated for roughly 100 to around 350 rewrites per cell. So they're very good for a home PC if you use a big SSD, but for anything that frequently rewrites large amounts of data, the flash will be dead fast.
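To put those rewrite counts in perspective, here's a rough lifetime estimate. The 1 TB capacity and the daily write rates are hypothetical example values, and this ignores write amplification and over-provisioning:

```python
# Quick wear-life estimate under the endurance figures quoted above.

capacity_tb = 1.0        # hypothetical 1 TB M.2 drive
pe_cycles = 350          # upper end of the quoted rewrite rating
tbw = capacity_tb * pe_cycles   # total terabytes written before wear-out

for writes_tb_per_day in (0.1, 1.0, 5.0):
    years = tbw / writes_tb_per_day / 365
    print(f"{writes_tb_per_day:4.1f} TB/day -> ~{years:.1f} years")
```

At light desktop write rates the drive lasts years; at the sustained rewrite rates a scratch or swap workload can generate, it's down to a couple of months.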
My understanding is that the main goal here is to use the SSDs for storing huge hash tables and the like - suitable for Bitcoin-style workloads and, more importantly, for governments cracking encryption and testing huge numbers of passwords.
I agree that if an application uses the SSD for some sort of continuous swap duty, it would end up being retired early or replaced often. And yes, that would greatly upset me :)
I think that while editing a video project, as long as the source files aren't constantly being recopied to the drive, it should provide nice read-speed benefits while maintaining a reasonable lifespan for things like NLE previews (this tends to slow my work down more than anything).
How exactly render previews would work once video FX have been added seems uncertain. If an NLE were optimized to do the bulk of its work on the GPU (OpenCL?), I imagine the preview benefits could be sustained with minimal impact on storage. If real-time FX can't be rendered by the GPU alone (legacy plugins), then the CPU would need to get involved and carry some of the load, which could bottleneck the process to some extent - the toy model below shows the shape of the problem. I'd hope NLE developers take this into account if/when they start to support these cards.
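A minimal sketch of that bottleneck reasoning in Python. Every stage throughput here is an invented number; the point is just that the effective preview rate is capped by the slowest stage in the chain, and a CPU round trip adds stages:

```python
# Toy model of the preview pipeline bottleneck described above.
# All stage throughputs (frames/sec) are invented illustration values.

def preview_fps(ssd_read_fps, gpu_fx_fps, cpu_fx_fps=None, pcie_fps=None):
    """Effective preview rate is limited by the slowest stage in the chain.

    If legacy FX force a CPU round trip, the CPU and PCIe-transfer stages
    join the chain; otherwise frames stay on the card end to end.
    """
    stages = [ssd_read_fps, gpu_fx_fps]
    if cpu_fx_fps is not None:
        stages.append(cpu_fx_fps)
    if pcie_fps is not None:
        stages.append(pcie_fps)
    return min(stages)

# GPU-only path (e.g. OpenCL FX): the on-card SSD read sets the pace.
print(preview_fps(ssd_read_fps=90, gpu_fx_fps=120))            # -> 90

# Legacy CPU plugin in the chain: the CPU stage caps the preview rate.
print(preview_fps(ssd_read_fps=90, gpu_fx_fps=120,
                  cpu_fx_fps=30, pcie_fps=60))                 # -> 30
```

In the GPU-only path the on-card storage is the win; once a legacy CPU plugin joins the chain, whichever of the CPU work or the PCIe hop is slower takes over as the limit.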
GPUs are poorly suited to most algorithms.
I'm sure we'll very soon see a full repeat of the CPU situation in the GPU camp. Actually, it's already smartphone owners who mostly pay for the ability to build the next GPU (read: a modern 14nm process).
Most probably, trouble in the smartphone market (more precisely, trouble for Apple and Samsung) will have huge consequences for semiconductors, and it could be the last straw.