|
| name = GeForce 700 Series
| image = ]
| codename = GK110, GK114, GK116, GK117
| created = 
| model = GeForce Series
| model1 = GeForce GT Series
| model2 = GeForce GTX Series
| transistors = 292M 40 nm (GF119)
| transistors1 = 585M 28 nm (GF117)
| transistors2 = 1,270M 28 nm (GK107)
| transistors3 = 1,270M 28 nm (GK208)
| transistors4 = 
| transistors5 = 2,540M 28 nm (GK106)
| transistors6 = 3,540M 28 nm (GK104)
| transistors7 = 
| transistors8 = 
| transistors9 = 7,080M 28 nm (GK110)
| arch = ]
| entry = 
}}
|
|
|
|
|
The '''GeForce 700 Series''' will be a family of ]s developed by ], to be used in desktop and laptop PCs. It will serve as the introduction to the Kepler Refresh architecture (GK-codenamed chips), named after the German mathematician, astronomer, and astrologer ]. A number of GeForce 700 series chips were released for mobile devices in April 2013; no desktop graphics cards have been released yet.
|
|
|
|
== Overview == |
|
|
With GK110, Nvidia focuses on compute performance. At 7.1 billion transistors it is the biggest GPU in terms of transistor count, dwarfing GK104 and GF110. Such a large die poses challenges from a fabrication and power consumption standpoint, but the resulting performance per watt remains strong, because many tasks, both graphical and compute, are massively parallel and map well to the large arrays of streaming processors found in GK110.
|
|
|
|
|
GK110 also increases the space and bandwidth of both the register file and the L2 cache. At the SMX level, register file space has grown to 256KB, composed of 65K 32-bit registers, up from Fermi. The L2 cache has grown to 1.5MB, twice the size of GF110's. The bandwidth of both the L2 cache and the register file has also doubled.
|
|
Performance in register-starved scenarios also improves, as more registers are available to each thread. This goes hand in hand with an increase in the total number of registers each thread can address, from 63 registers per thread on Fermi to 255 registers per thread on GK110.
|
|
|
|
|
With GK110, Nvidia also reworked the GPU texture cache for compute use. For compute workloads, the 48KB texture cache becomes a read-only cache, specializing in unaligned memory access workloads. Furthermore, error detection capabilities have been added to make it safer for use with workloads that rely on ECC.<ref name="anandtech-GK110-preview">{{cite web | url=http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last/3| title=NVIDIA Launches Tesla K20 & K20X: GK110 Arrives At Last | date=11/12/2012 | publisher=AnandTech}}</ref>
|
|
|
|
|
== Features == |
|
|
The GeForce 700 Series contains features from both GK104 and GK110. Kepler-based members of the 700 series add the following standard features to the GeForce family.
|
|
|
|
|
Derived from GK104:
|
|
|
|
|
* ] interface |
|
|
|
|
|
* ] 1.2 |
|
|
* ] 1.4a 4K x 2K video output |
|
|
* ] hardware video acceleration (up to 4K x 2K H.264 decode) |
|
|
* Hardware H.264 encoding acceleration block (NVENC) |
|
|
* Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround) |
|
|
* Bindless Textures |
|
|
* GPU Boost |
|
|
* TXAA |
|
|
* Manufactured by ] on a 28 nm process |
|
|
|
|
|
New features from GK110:
|
|
|
|
|
* Compute Focus SMX Improvement |
|
|
* ] Compute Capability 3.5 |
|
|
* New Shuffle Instructions |
|
|
* Dynamic Parallelism |
|
|
* Hyper-Q (Hyper-Q's MPI functionality reserved for Tesla only)
|
|
* Grid Management Unit |
|
|
* NVIDIA GPUDirect (GPUDirect's RDMA functionality reserved for Tesla only)
|
|
|
|
|
=== Compute Focus SMX Improvement === |
|
|
|
|
|
With GK110, Nvidia opted to increase compute performance. The single biggest change from GK104 is that, rather than 8 dedicated FP64 CUDA cores, GK110 has up to 64 per SMX, giving it 8x the FP64 throughput of a GK104 SMX. The SMX also gains register file space, which has increased to 256KB compared to Fermi. The texture cache is improved as well: for compute workloads, the 48KB texture cache can serve as a read-only cache.<ref name=anandtech-GK110-preview />
|
|
|
|
|
=== New Shuffle Instructions === |
|
|
At a low level, GK110 also gains additional instructions and operations to further improve performance. New shuffle instructions allow threads within a warp to share data without going back to memory, making the process much quicker than the previous load/share/store method. Atomic operations are also overhauled, speeding up their execution and adding some FP64 operations that were previously available only for FP32 data.<ref name=anandtech-GK110-preview />
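As an illustration, a warp-level sum reduction using the Kepler-era <code>__shfl_down()</code> intrinsic might look like the following sketch (function and variable names are illustrative, not from Nvidia's documentation; CUDA 9.0 and later replace these intrinsics with <code>_sync</code> variants):

```cuda
// Sketch: sum-reduce a value across the 32 threads of a warp using
// Kepler shuffle instructions, with no shared-memory round trip.
__inline__ __device__ int warpReduceSum(int val) {
    // Each step halves the number of participating lanes: lane i
    // reads the value held by lane i + offset and adds it to its own.
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down(val, offset);
    return val;  // lane 0 now holds the sum of all 32 lanes
}

__global__ void sumKernel(const int *in, int *out) {
    int sum = warpReduceSum(in[threadIdx.x]);
    if (threadIdx.x == 0)
        *out = sum;  // only lane 0 writes the warp's total
}
```

Before shuffle instructions, the same reduction had to stage partial sums through shared memory, with explicit stores, synchronization, and loads between steps.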
|
|
|
|
|
=== Hyper-Q === |
|
|
Hyper-Q expands the number of GK110 hardware work queues from 1 to 32. This is significant because, with a single work queue, Fermi could at times be under-occupied, as there was not enough work in that queue to fill every SM. With 32 work queues, GK110 can in many scenarios achieve higher utilization by putting different task streams on what would otherwise be an idle SMX. Hyper-Q also maps easily to MPI, a common message passing interface frequently used in HPC. Legacy MPI-based algorithms, originally designed for multi-CPU systems and since bottlenecked by false dependencies, now have a solution: by increasing the number of MPI jobs, it is possible to utilize Hyper-Q on these algorithms to improve efficiency without changing the code itself.<ref name=anandtech-GK110-preview />
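On the software side, the independent work streams that Hyper-Q keeps separate in hardware are expressed as ordinary CUDA streams. A minimal sketch, with an illustrative kernel and sizes (not taken from any Nvidia sample):

```cuda
#include <cuda_runtime.h>

__global__ void busyKernel(float *data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] = data[i] * 2.0f + 1.0f;  // stand-in for real work
}

int main() {
    const int nStreams = 32;  // matches GK110's 32 hardware queues
    cudaStream_t streams[nStreams];
    float *buffers[nStreams];

    for (int i = 0; i < nStreams; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMalloc(&buffers[i], 256 * sizeof(float));
        // On Fermi these launches funnel through one hardware queue;
        // with Hyper-Q each stream can feed an otherwise idle SMX.
        busyKernel<<<1, 256, 0, streams[i]>>>(buffers[i]);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < nStreams; ++i) {
        cudaFree(buffers[i]);
        cudaStreamDestroy(streams[i]);
    }
    return 0;
}
```

The same code runs on Fermi; Hyper-Q changes only how much of the stream-level concurrency the hardware can actually exploit.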
|
|
|
|
|
=== Dynamic Parallelism === |
|
|
Dynamic Parallelism is the ability for kernels to dispatch other kernels. With Fermi, only the CPU could dispatch a kernel, which incurred a certain amount of overhead from having to communicate back to the CPU. By giving kernels the ability to dispatch their own child kernels, GK110 can both save time by not having to go back to the CPU and, in the process, free up the CPU to work on other tasks.<ref name=anandtech-GK110-preview />
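A minimal sketch of the idea, with illustrative kernel names (compiling device-side launches requires compute capability 3.5 and relocatable device code, e.g. <code>nvcc -arch=sm_35 -rdc=true</code>):

```cuda
__global__ void childKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;  // stand-in for the follow-up computation
}

__global__ void parentKernel(float *data, int n) {
    // With Dynamic Parallelism, the device launches follow-up work
    // itself instead of returning control to the CPU to request it.
    if (threadIdx.x == 0 && blockIdx.x == 0)
        childKernel<<<(n + 255) / 256, 256>>>(data, n);
}
```

On Fermi, the decision to launch <code>childKernel</code> would have required a round trip through the host: synchronize, inspect results, then launch again from CPU code.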
|
|
|
|
|
=== Grid Management Unit === |
|
|
Enabling Dynamic Parallelism requires a new grid management and dispatch control system. The new Grid Management Unit (GMU) manages and prioritizes grids to be executed. The GMU can pause the dispatch of new grids and queue pending and suspended grids until they are ready to execute, providing the flexibility to enable powerful runtimes, such as Dynamic Parallelism. |
|
|
The CUDA Work Distributor in Kepler holds grids that are ready to dispatch, and is able to dispatch 32 active grids, double the capacity of the Fermi CWD. The Kepler CWD communicates with the GMU via a bidirectional link that allows the GMU to pause the dispatch of new grids and to hold pending and suspended grids until needed. The GMU also has a direct connection to the Kepler SMX units, permitting grids that launch additional work on the GPU via Dynamic Parallelism to send the new work back to the GMU to be prioritized and dispatched. If the kernel that dispatched the additional workload pauses, the GMU will hold it inactive until the dependent work has completed.<ref>{{cite web | url= http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf | title= NVIDIA-Kepler-GK110-Architecture-Whitepaper}}</ref>
|
|
|
|
|
=== NVIDIA GPUDirect === |
|
|
NVIDIA GPUDirect is a capability that enables GPUs within a single computer, or GPUs in different servers located across a network, to directly exchange data without needing to go to CPU/system memory. The RDMA feature in GPUDirect allows third-party devices such as SSDs, NICs, and IB adapters to directly access memory on multiple GPUs within the same system, significantly decreasing the latency of MPI send and receive messages to/from GPU memory. It also reduces demands on system memory bandwidth and frees the GPU DMA engines for use by other CUDA tasks. Kepler GK110 also supports other GPUDirect features, including Peer-to-Peer and GPUDirect for Video.
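The Peer-to-Peer feature, for example, is exposed through the CUDA runtime API. A sketch of a direct GPU-to-GPU copy within one system (device numbers and buffer sizes are illustrative):

```cuda
#include <cuda_runtime.h>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can GPU 0 reach GPU 1?

    float *src, *dst;
    cudaSetDevice(1);
    cudaMalloc(&src, 1024 * sizeof(float));
    cudaSetDevice(0);
    cudaMalloc(&dst, 1024 * sizeof(float));

    if (canAccess) {
        cudaDeviceEnablePeerAccess(1, 0);  // GPU 0 maps GPU 1's memory
        // The copy moves directly between the two GPUs over PCIe,
        // without staging through CPU/system memory.
        cudaMemcpyPeer(dst, 0, src, 1, 1024 * sizeof(float));
    }
    return 0;
}
```

Without peer access, the same transfer would be two copies, device-to-host followed by host-to-device, consuming system memory bandwidth both ways.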
|
|
|
|
|
|
==Products== |
|
|
{{refend}} |
|
|
|
|
|
Some implementations may use different specifications.