Insider Daily: Sony desperately tries to upgrade the PS4 with a dGPU now, to make it like the Xbox One
misterxmedia
Misterx: Also, where is the DX11.2 support Microsoft promised? We only saw DX11.1. :)
Insider: DX11.1+ will be getting a downgrade to DX8.1- in the One :) ...well, that is what the SDF would like, and many might think that...

But the truth is, it is only a placeholder. Wait till the 29th or so, I hear at TGS, as the NDA will lift.

Misterx: Can you confirm the PS4 is 1.8TF? Or is it 3.2TF stacked, as you previously said?
How much more powerful is the Xbox One? It seems like the Xbox One is in 3.7TF-4.2TF territory and the PS4 will upgrade to 3.2TF.

That would not give Microsoft a KO victory, and that means the Xbox brand could not recover fast enough from the damage Sony created with their lies and FUD...

Also, how is it possible to do W2W with 28nm and 20nm? Mistercteam says 2.5D is the only possibility...
Insider: PS4 1.8TF; X1 2.4TF without cloud, 2.9TF with cloud (misterx comment: suddenly it is only 2.4TF now... below he hints at why he lowered this from 4.2TF; in my understanding he tries to present himself as a Level 3 or 4 developer, and Level 3/4 developers do not have access to the advanced dev kits). Respectively, these numbers may not carry over into real-world figures. No pain, no gain.

The PS4's 1.8TF is a theoretical number... this 40% is based on white-paper tests... the PS4 does not use the same custom silicon as the X1...

The DPUs are 80% faster than Sony's GPGPU compute units, whereas the Sony GPU is 14:4, and those processes are GPU-dependent, not specialized. They are definitely a lot slower and harder to develop for; as Cerny has stated, it will take 2-4 years to see the PS4 architecture's benefits.

The biggest struggle Sony faces is their belief that the GPU can do it all. It has to do a lot of extremely heavy lifting to reach their aspirations. 70% of PSEye processing has to be done on the GPU compute units and takes up two whole cores of processing. Yes, Onion and Garlic + the bus are extremely fast, but they do have stalls... and the CPU is crippled to 10GB/s. Fanboys have this vision that you just double the bus, which in theory is true, but it comes with extreme optimization problems. The fact is the CPU just can't keep up with the GPU and compute load.

This is where the X1 hardware really shines. ESRAM on chip has allowed MS to get 68GB/s from the DDR3, but when the CPU uses HSA memory coherently across ESRAM/DDR, it has a constant 30GB/s bus with 6 ops per core. The X1 design is an HSA+ design in theory; within the nature of HSA, it has been modified a lot. Also, the main SoC GPU does not have to do physics-based calculations, as there is a specialized process for this. There are two geometry engines, and there is also an extra GFX core and compute units, with their own 10MB of ESRAM they can access independently or shared. The sound and video do not put constraints on the GPU, and Kinect has all its own processing externally.

As for W2W, mistercteam may be right. There have been a lot of changes in the X1 design and it is looking more and more like 2.5D. But I can honestly say I am not sure at this time whether the co-processors are on a different piece of silicon from the main SoC, or combined, as we don't have access to the final retail SDK. Even Gamescom was not final either. There are no manufacturing problems, as they really do have fail-safes in place for bad binning. The Wired HD shots showed where the co-processors should have been on the motherboard, but they were not in those shots. More will be said at TGS. From the story coming directly from the upper suits, it seems MS may take a swing at Sony on their own turf. :)
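For reference, a minimal sketch (Python) that just collects the figures the insider asserts above; every value is his claim rather than a confirmed spec, and the key names are invented labels:

```python
# All values are the insider's claims from this post, not confirmed specs.
# Key names are invented labels for readability.
insider_claims = {
    "x1_ddr3_effective_gbs": 68,    # "esram on chip has allowed MS to get 68GB/s from the DDR3"
    "x1_coherent_cpu_bus_gbs": 30,  # constant coherent ESRAM/DDR bus, "6 ops per core"
    "ps4_cpu_bus_gbs": 10,          # his "crippled to 10GB/s" figure for the PS4 CPU path
    "x1_geometry_engines": 2,
    "x1_extra_esram_mb": 10,        # extra GFX core/CUs with their own 10MB of ESRAM
}
for name, value in insider_claims.items():
    print(f"{name}: {value}")
```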

Insider: I see a heap of people trying to add up the X1 memory bandwidth and not getting the right figure. They all seem to see that something is missing or not adding up. They think MS are fools... but the real truth is right in front of them.

What if they factored in why there is a quad memory design, 4x 256-bit? Because each GPU and CPU has two read and write address lines, rather than one big 512-bit address line. MS designed two 256-bit GFX cores, each of which can access the CPUs. That is also why the CPUs are not one big cluster; they are split into 4-core groups. It would have made CPU resources worse over a 512-bit bus, and it would have caused stalling and fragmentation. This is also why there are two swizzle main accelerators / data move engines with 30GB/s R/W that are solely for CPU/GPU. Do the maths with a second GPU architecture; then it will all add up. The system will not see 5TF performance gains, but 2.4-3.1 is what has been said in many discussions, which is why I have said 680 level of performance :)
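To make the "do the maths" concrete, a hedged sketch of the adding-up described above. The 68GB/s DDR3 figure is the published one; the four 256-bit channels and the 30GB/s move-engine rate are the insider's unverified claims:

```python
# The "quad memory design" the insider claims: four 256-bit paths vs one 512-bit.
channels, width_bits = 4, 256
print(f"aggregate width: {channels * width_bits} bits vs a single 512-bit bus")

# The naive adding-up he says people attempt (his figures, not official):
ddr3_gbs = 68          # published DDR3 figure (4 x 64-bit DDR3-2133 channels)
move_engine_gbs = 30   # his claimed R/W rate per "swizzle" data move engine
print(f"naive sum: {ddr3_gbs + 2 * move_engine_gbs} GB/s")  # 128 GB/s
```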

Misterx: Do you still expect a dGPU is there?
Insider: It is more like 2.4-3.1 TF... (misterx wonders: dGPU only, or the whole system?) which, for a console with high-level machine code available to developers, is a staggering amount of TF... and that isn't half of what is in there, as there is a special DPU graphics process that handles other graphics things that PCs are going to struggle with... I will tell you 2.5D is still a highly effective method for stacking... we were told W2W, but that could be for the extra stuff: GPU, CPU, GPU, ESRAM. I would hold off, as I cannot give you a straight-up 100% truthful answer. We don't have the final SDK; only Capcom, Crytek, EA, and Ubisoft do. I have seen it but not open. Back in June we saw inside the SDK, and there were two APUs in the box. It looked like the Wired photos, very much like the 360 design, except the second APU was close but on its own silicon. We were told that it would be merged with the main SoC W2W... that is why the clocks were dropped from 1GHz to 853MHz. There was too much power draw for the chip, and the fans were always kicking in, so they downclocked. But they had a winning design with massive capabilities... I am not sure if both GPUs are running @ 850; I heard that one is 900MHz, but I do not know if the source was authentic. Plus, four 256-bit buses are a lot more efficient than one big 512-bit one, especially when crossfired with multi-CPU GPGPU.

Misterx: So the second dGPU will be only 0.9-1.5 TF? How does that make sense?
Insider: I really want to tell you more... believe me... but right now there is a massive hardware battle going on. Sony knows MS has crushed them with the X1 specs; Sony is desperately trying to get a second dGPU into the PS4. That is why the truth is starting to leak out. Sony has had no final hardware running on real PS4 hardware; it's all phase-3 dev kits. Sony is trying to boost the console's hardware. I think the gaming media will come out with a lot more of the truth: the PS4 is not what Sony promised. Cerny knows this too... it's a console war. Sony has been very ruthless, and time has run out. I can't say much more; NDA, and MS has the army out right now. The X1 is extremely powerful; you don't even know the half of it.

Misterx: That sounds better :)
So, no problem for the Xbox One to be on par with PCs for the next 5 years? And if there is no PS4 upgrade, then the 2-3x more powerful claim is still in place? :)
We need somehow to be sure MS will quickly regain its ground... if it is 1.84 vs 2.4, that will not be fast... also, why does the BF4 dev say the PC version will be better?
Insider: It's going to be on par... but it will still take coding to the metal... PC is an open platform; it never stagnates. That is where the cloud comes into the X1 architecture. Because the data centers are ever evolving, and so is the internet, it will allow a grasp on the PC platform's ever-evolving expenditure. Of course it is hypothetical in some aspects now, for GPU culls, but it is very efficient for physics, adaptive A.I., and branch-predicted light and geometry. Five years from now it will be rendering a full 2TF GPU's performance with ease.

When it comes to the X1 and BF4, there are going to be some compromises. You're not going to see native 4K, or the super-high AA or AF levels that a PC with 2 or 3 7890s or 780s is going to be doing. It's just not possible. 64 players @ 60fps and 1080p will happen... but PC is the lead platform, so it's going to be better with a 4K TV and a $4,000 computer. It is logical.

Insider: Mistercteam is correct in his "Внимание к деталям" (misterx: "attention to detail" in Russian... shocked he wrote this in Russian)... keep digging... I expect MS to start looking at you guys with a different mindset now. Expect a few discrediting comments. :)

[image: dGPU_linkedn]

From the latest insider news I understood:

a) The XB1 TF will be 2.4 minimum for the whole system (not only the dGPU)
b) There is a dGPU with special task loads, extra CPU/compute cores, and additional ESRAM
c) We will get some visits from MS employees firing shots at the forum, because you guys have dug up a lot of things that MS wanted to keep secret. (Maybe this blog is the main information base for Sony to get more insight into the XB1 specs - hahahaha)

Looks great if all of this is confirmed at the end of September. And I am not disappointed at all; I never expected TF above 3.x, to be honest.


As I understood it, the insider tried to hide his traces from MS ninjas and gave only those numbers that Level 3 or Level 4 developers have. Later he said: "you don't even know the half... surprises, and on par with PCs...". With 2.4TF that is not possible, even with a closed-box 2x multiplier.


I think you're right (Anonymous)

Too late for Sony to match Microsoft.

mrlookingbill

2013-08-29 10:23 am (UTC)

This is why Microsoft waited this late to reveal things. Sony's PS4s are in production and at distribution centers across the globe. The only thing they can do is present another lie and say their GPU is a dGPU as well.

Okay, just reread it and came up with this theory. Tell me what you guys think.

Main SoC = 1.31
dGPU = customized high-end mobile card, the HD8880 (Chinese rumor from the beginning of the year) = ~1
CPU = modified Jaguar = ~0.102
Co-processors = ~0.06

So the total is at least 2.5 TFLOPs, which is what Epic wants.
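A quick sanity check of this theory's arithmetic (all component figures are the commenter's guesses, not confirmed specs):

```python
# All component figures are the commenter's guesses, not confirmed specs.
main_soc     = 1.31   # claimed main SoC GPU TFLOPs
dgpu         = 1.0    # "customized HD8880" rumor, ~1 TFLOP
cpu          = 0.102  # modified Jaguar estimate
coprocessors = 0.06   # miscellaneous co-processor estimate
print(f"{main_soc + dgpu + cpu + coprocessors:.2f} TFLOPs")  # ~2.47, roughly the 2.5 he cites
```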

1TF for the dGPU is too small... Epic wants 2.5TF just for a tech demo... to create a game you need more, as the game will have a free-roam camera and everything dynamic.


Some news from Neogaf about XB1 Kits

mrzweistein

2013-08-29 10:53 am (UTC)

Source: http://www.neogaf.com/forum/showpost.php?p=78827773&postcount=980

Albert Penello, MS:

With so many people questioning what I meant around “near final” at Gamescom, I want to clarify my own comments (typically I wouldn’t chime in on a Sony thread, since I can’t speak to what Sony is doing), but since I can’t start a new thread, I’m adding my comments here. So before this starts – I’m not here to bash Sony.

At E3, you saw Xbox One games running in a couple of environments:

Development PC’s (these were our early Alpha kits spec’d similarly to HW targets)
Development Kits (these are the white and black consoles that look like retail units)
Other PC’s (in these cases, like the drama around the PC with the nvidia GPU), we asked developers to make sure what they were showing was reflective of what could be achieved on Xbox One.

I think we were pretty open about it. Some may disagree, but I don’t recall us trying to be particularly cagey about this since it’s typical for this point in the console to have game development being scattered.

At this point, just about everything is running on “near final” Hardware. What’s unique about our program this time is that Dev Kits and Retail Kits are exactly the same.

Despite the belief, our Dev Kits DO NOT have 12gb of ram. They have 8gb, just like shipping units. So anything you see running on a black Xbox One console is the same unit we’re going to ship.

Now, the reason I say “near final” is because you guys like us to be precise. Anyone that knows HW development understands that millions of units don’t come flying off the assembly line by just flipping a switch (see what I did there?)

You have many, many units that are run on the factory production line before final product starts as you’re testing quality, tweaking the manufacturing process, etc. But all these consoles you’re seeing are coming off factory production lines. That’s why I caveat “near final” because, before you start full production, you go through many test runs.

So again to clarify – we have real, retail consoles running real code. Every time you see a black box running software, it’s the real thing. I had almost 300 people see the dash demo at Gamescom, and people were free to inspect the HW I was demoing on.


Re: Some news from Neogaf about XB1 Kits

josefajardo

2013-08-29 11:11 am (UTC)

same article here : "Three Different Xbox One Dev Kits (Including PCs) Were Used To Show Games At E3 2013 Admits Microsoft " - http://www.gamepur.com/news/11893-three-different-xbox-one-dev-kits-including-pcs-were-used-show-games-e3-201.html

Just watched the OXM hands-on video with the X1. On top of the console Microsoft put a sticker... the presenter said it was because Microsoft is trying to hide the specs.

Re: Something interesting

mrzweistein

2013-08-29 11:10 am (UTC)

Please, can you provide a link?

Operations666, I agree with you... they hid that stuff :)

I'm not an engineer, but if all of this is true, shouldn't you keep the info to yourselves, so that Sony eats up all the s*** they have done and the lies they have given to their gamers? Why expose what the Xbox One has, so that Sony can copy it and upgrade their system?

Re: Xbone Specs from a pro

rmb2

2013-08-29 01:13 pm (UTC)

What? Charlie doesn't trash-talk MS?

I'm very impressed.

The first clue they hide is of course the power requirement; the 2nd is die size.

mistercteam

2013-08-29 01:10 pm (UTC)

By looking at the power requirement together with HPM, you can approximate whether MS is purposely hiding the spec.

MS said --> 100W, but if it is FCC-listed at 185-200W, then they are hiding something.

Also remember HPM takes less power than the standard 28nm node process; it is called HPM --> high-performance mobile.

2nd is of course die size; this one is physically visible, and you can approximate it.

misterX, did you delete some comments?

It even beats Tahiti at a reported similar ~360 mm^2 area:

Tahiti: 365 mm^2 --> ~4 billion transistors
X1: 360 mm^2 --> ~5 billion transistors
That is over 20% more dense than Tahiti.

And that is even without the additional 150-175 mm^2 we found. Imagine 170 mm^2 of it prioritized for ALUs. (The density arithmetic is sketched below.)
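Running the density comparison with the quoted numbers (the transistor counts are the commenter's estimates, not confirmed):

```python
# Transistors per mm^2, using the commenter's quoted figures (his estimates).
tahiti_density = 4.0e9 / 365   # ~11.0M transistors/mm^2
x1_density     = 5.0e9 / 360   # ~13.9M transistors/mm^2
print(f"{x1_density / tahiti_density - 1:.0%} denser")  # ~27% by these figures
```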


2nd: the insider said W2W, and there is also the 175mm^2. Is W2W really needed? --> but there is Infineon

mistercteam

2013-08-29 01:45 pm (UTC)

After looking at the huge die size + the additional 175mm^2 area, I think (my own feeling) MS actually doesn't need W2W. They have even made it denser than AMD or Nvidia (Nvidia is worse compared to AMD; Nvidia dies basically take more area).

But somehow I have not found a single un-lidded photo (I mean with "made in Malaysia" on the die itself).

So if someone can link an example of "made in Malaysia" on the die, rather than on the packaging or lid, that would be great.

But I found one foundry that somehow relates to all the big foundries: it is Infineon.

Some facts:
Infineon has a Malaysia foundry
Infineon relates to GloFo for 3D/2.5D stacking
Infineon/IBM are listed for W2W

But of course the relations and facts about it are still limited to connecting the dots.

It is strange, though, that the one that has a real die foundry there (not just packaging or lidding after testing) is Infineon, as Infineon is listed by Tezzaron as doing W2W with the same tech as IBM.

some link about infineon

https://www.mpp.mpg.de/~sct/welcomeaux/activities/pixel/IZM-3D-System-Integration-Technologies.pdf
"Fraunhofer IZM in cooperation with Infineon Technologies developed a wafer-to-wafer stacking technology based on low temperature bonding with polyimide as intermediate layer and a 3D metallization process which provides a very high density vertical wiring between the thinned device wafers by the use of W- or Cu-filled inter-chip vias. The so called InterChip-Via (ICV) technology is described in detail elsewhere [13]. Figure 9 shows the corresponding schematic of a vertically integrated device stack"

*) Fraunhofer is a GloFo partner for 20nm 3D stacking

http://indico.cern.ch/getFile.py/access?contribId=2&resId=0&materialId=slides&confId=249875

*) Shows wafer-level stacking is achieved only by Infineon & IBM.
*) GloFo uses Fraunhofer (and Fraunhofer actually uses Infineon tech)

So if MS is actually also doing W2W, well, this will be insanely powerful, like the Insider said above (MrX's latest insider daily): only half the info is out; there is more.

Of course, I really think that even without W2W it is already quite powerful.


I suggest people read this article, explaining why the same area can pack more transistors (same node)

mistercteam

2013-08-29 02:05 pm (UTC)

http://www.tomshardware.com/news/Steamroller-High_Density_Libraries-hot-chips-cpu-gpu,17218.html

"To deliver more power efficient computations, AMD has employed a high-density cell library to reduce the area and power by 30 percent (bottom plot). The design yields a more portable and energy efficiency CPU core employing industry standard design methodologies well adapted to a foundry model. These improvements, according to AMD, are yielding a 15 to 30 percent lower energy per operation for power constrained designs, as compared to a full process node improvement."

It all depends on how well you organize it.

Of course it won't pack 100% more transistors; that is impossible. But 20-30% is the target (see the rough arithmetic below).

Remember, the future trend is all about efficiency:
- efficient power use
- utilization
- data movement
- using die area more efficiently
and more
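As rough arithmetic for that target: if a high-density cell library shrinks cell area by a fraction r, the same die area ideally packs 1/(1-r) times as many cells. Real layouts will not scale this cleanly:

```python
# Idealized relation: shrinking cells by fraction r fits 1/(1-r) more per mm^2.
for r in (0.20, 0.30):
    gain = 1 / (1 - r) - 1
    print(f"{r:.0%} smaller cells -> ~{gain:.0%} more transistors in the same area")
# 20% -> ~25%, 30% -> ~43%; so a 20-30% density gain target is plausible arithmetic.
```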

Re: I suggest people read this article, explaining why the same area can pack more transistors (same node)

rwilmarth

2013-08-29 02:09 pm (UTC)

Right, but wouldn't Tahiti employ the same tech?

NDA confirmation date?

xboxfandev

2013-08-29 02:17 pm (UTC)

Is there any URL that points to September 21/29?

Or is the NDA date insider information only?

First non-anonymous post.

I created an account mostly to combat the non-first-page comments bug. It did work. Whenever I load a later page with X comments, it continues to show X comments, even if the first page shows more. Have you seen that?

I have been here for a while. I already created a homebrew game for the MSX computer in the 1990s (in assembly language, so I have some knowledge):
http://www.youtube.com/watch?v=qtCN6Urm1d4

Now I'm a "normal" Java programmer, but I have been trying some projects. I hope one of them comes to Xbox One...

Thanks for all the info.

Re: NDA confirmation date?

misterxmedia

2013-08-29 05:53 pm (UTC)

Welcome... I have some assembly knowledge too, and tried to code demoscene stuff :)

While specifics were not given out, SemiAccurate was told it was “significantly wider” along with beefed up buffers and deeper queues. Don’t discount this as a minor change, it is both critical to the system performance and a very complex thing to do. It will be interesting to see how Sony did their variant if they ever give a talk on the PS4 architecture.

That means the XBox One’s 8 Jaguar cores are clocked at ~1.9GHz, something that wasn’t announced at Hot Chips. Now you know.

http://semiaccurate.com/2013/08/29/a-deep-dive-into-microsofts-xbox-ones-architecture/

lol nice find^^ X1 CPU more powerful confirmed!!

@MrX this insider quote is interesting

mistercteam

2013-08-29 02:36 pm (UTC)

" I have seen it but not open. Back in june we saw inside the sdk and there was two apu's in the box. It looked like the wired photos. Very much like 360 design expect the second apu was close but on its own silicon. We were told that it would merged with the main soc w2w.."

The term "APU" here is just to make it easy to grasp:

Layer 2: main SoC acting as a big CPU (same as MS's main SoC) + other things
Layer 1: 2 APUs acting as a dual GPU on different silicon, 2.5D (side by side)
*) For layer 1, imagine the Xbox 360's GPU + eDRAM, only this time the eDRAM and the GPU are the same die size, put side by side.

For example, like this: it is 2.5D (side by side), showing two identical complete dies side by side. IBM 2.5D, two CPUs side by side:
http://eda360insider.files.wordpress.com/2012/05/ibm-3d-dram-core-mech-demo.gif




Re: @MrX this insider quote is interesting

mrzweistein

2013-08-29 02:42 pm (UTC)

That's the first thing I thought, but the insider also said that the dGPU was customized to do special things PCs will struggle with even in the future. We will see. I think we already have a good understanding of what the XB1 will look like spec-wise. IMO it is not that important which process is used for the final silicon (2.5D or 3D), as long as the performance is not negatively influenced by it.

Mr X, what has happened to the plan to start a full-blown forum? Have you chosen the name?

Could there be Power8 tech in XboxOne

maxoo94

2013-08-29 02:56 pm (UTC)

http://www.comptoir-hardware.com/images/stories/_cpu/ibm_power8.jpg

We have the move engines, the eDRAM... etc., etc.

Re: Could there be Power8 tech in XboxOne

misterxmedia

2013-08-29 05:51 pm (UTC)

Insiders hint at this... so yes, we expect this one inside the dGPU.

A dGPU in HSA is not the same as current dGPUs.

Insider is trying to mislead us a bit :)

qo_lantern_op

2013-08-29 03:04 pm (UTC)

He said the power of the X1 is

2.4 TFLOPs - 3.1 TFLOPs.

Multiply that number by 4, due to the custom shader-core architecture.

The insider is trying not to say too much :)

9.6 TFLOPs in pure local resources, and 12.4 TFLOPs with cloud.

Still very much the beast we have been expecting.
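Taking the poster's multiplier at face value (the x4 "custom shader core" factor is his speculation, not a confirmed spec), his two headline numbers are just the endpoints of the range times four:

```python
# The x4 "custom shader core" multiplier is the poster's speculation.
low, high = 2.4, 3.1
print(low * 4, high * 4)  # 9.6 and 12.4 -- his "local" and "with cloud" figures
                          # are simply the two ends of the range multiplied by 4
```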

Re: Insider is trying to mislead us a bit :)

mrzweistein

2013-08-29 03:10 pm (UTC)

Man, you really have a very optimistic approach regarding the XB1 performance figures. Nothing wrong with it, but very, very optimistic IMO. :)

"The CPUs connect to four 64b wide 2GB DDR3-2133 channels for a grand total of 68GB/sec bandwidth"

I think this is different from the information about the RAM: 2133 instead of the 1333 I read on many sites!

Is it important, or just a minor difference?

It was 2133 in the first place, and even today there are DDR3-3200 variants.
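The 68GB/s figure itself settles the 2133-vs-1333 question. Peak DDR3 bandwidth is channels x bus width in bytes x transfer rate:

```python
# Peak DDR3 bandwidth = channels x (bus bits / 8) x MT/s.
def ddr3_bandwidth_gbs(mt_per_s: int, channels: int = 4, bus_bits: int = 64) -> float:
    return channels * (bus_bits // 8) * mt_per_s * 1e6 / 1e9

print(ddr3_bandwidth_gbs(2133))  # ~68.3 GB/s, matching the quoted 68 GB/s
print(ddr3_bandwidth_gbs(1333))  # ~42.7 GB/s, what DDR3-1333 would give instead
```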

Witcher 3 Dev: PS4 and Xbox One Versions Will "Probably" Differ, Depends on Individual Hardware

mrlookingbill

2013-08-29 04:40 pm (UTC)

Www.gamingbolt.com/witcher-3-dev-ps4-and-xbox-one-versions-will-probably-differ-depends-on-individual-hardware

Bungie says there will be graphical differences:
http://www.ign.com/boards/threads/destinys-graphics-will-be-different-on-the-ps4-than-on-xbox-one-bungie-confirms.453119317/

This is the second developer that has come out and said there would be visual and graphical differences between the PS4 and Xbox One.

Re: Witcher 3 Dev: PS4 and Xbox One Versions Will "Probably" Differ, Depends on Individual Hardware

mistercteam

2013-08-30 12:23 am (UTC)

Nice find. Remember the WD developer who said the XB1 version will be more dynamic thanks to the cloud? I think he made that comment just to be polite.

The real thing is that even without the cloud, the X1 will be better.

http://www.eetimes.com/document.asp?doc_id=1257181

AMD has had silicon implementations of the cores working in its lab for at least a month. It has also sampled to at least one PC maker its Ontario processor, the first processor to use Bobcat cores, aimed at thin and light notebooks and netbooks.


Ontario, made in a 40nm TSMC process, will use two Bobcat cores, a Microsoft DirectX 11 graphics core and DDR3 memory. It is expected to ship in systems in the first half of next year.

DirectX 11 graphics cores?

Re: DX 11 graphics core

mistercteam

2013-08-30 12:29 am (UTC)

That's very old; it is not even GCN 1.0, it is VLIW.

The node process is 40nm.

Remember there is a lot of AMD IP; sometimes they reuse old IP in products.

A French site made news out of one of the comments from Parasite76

kipotan

2013-08-29 04:55 pm (UTC)

http://www.xboxlive(.)fr/news_affiche_22430.html

It's about Rockstar Red Dead Revolver exclusivity on Xbox One and Sony's pressure on Ubisoft.


But why now? Do they know something in line with this?


So we know from Hot Chips that there are 15 co-processors along with the APU.
From the insider, he told us:
1 for physics, 2 for geometry.

MS lists 4 DSPs for audio (2 scalar, 1 vector, and 1 other);
VGLeaks has 4 processors for data movement (1 encode, 1 decode, and the insider says 2 swizzle).
That adds up to 11.

For the remaining four (not three: 15 - 11 = 4), I'm thinking 2 video processors (1 Kinect, 1 video recording),

the dGPU, and one DX11 core for hardware tiling of resources. (A quick tally follows below.)

Or 12 additional CU cores plus 1 physics and 2 geometry?
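A quick tally of that counting (the individual assignments are this thread's speculation; only the total of 15 comes from Hot Chips):

```python
# Assignments are the thread's speculation; only the total of 15 is from Hot Chips.
accounted = {
    "physics (insider)": 1,
    "geometry engines (insider)": 2,
    "audio DSPs (MS: 2 scalar, 1 vector, 1 other)": 4,
    "data move (VGLeaks: 1 encode, 1 decode, 2 'swizzle')": 4,
}
known = sum(accounted.values())
print(known, 15 - known)  # 11 accounted for, leaving 4 (not 3) to guess at
```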

Ps4 FCC

(Anonymous)

2013-08-29 05:39 pm (UTC)

Love the blog! Just a quick question: if the PS4 has already passed through the FCC, doesn't that mean the system is now final hardware? Surely they couldn't upgrade it by the 29th of Nov?

No, it's impossible; otherwise they would need to push the launch back at least 6 months after November.

