
New Graphics Memory up to 14 Gbps

In Q4 2015, JEDEC (a major semiconductor engineering trade organization that sets standards for dynamic random access memory, or DRAM) finalized the GDDR5X specification, along with accompanying white papers. This is the memory specification expected to be used for next-generation graphics cards and other devices. Like previous generations of GDDR, the new technology is designed to improve the bandwidth available to high-performance graphics processing units without fundamentally changing the memory architecture of graphics cards or the memory technology itself, although these new specifications are arguably pushing the physical limits of the technology and hardware in its current form.

The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data rates by up to a factor of two, it improves the energy efficiency of high-end memory, and it defines new memory chip capacities to enable denser memory configurations on add-in graphics boards and other devices. What is very important for chip developers and graphics card makers is that GDDR5X should not require drastic changes to graphics card designs, and the general feature set of GDDR5 remains unchanged (which is why it is not being called GDDR6).

Performance Improvements

Today, highly binned GDDR5 memory chips can operate at data rates of 7 Gbps to 8 Gbps. While it is possible to increase the performance of the GDDR5 interface for command, address and data in general, there are limitations when it comes to array speed and command/address protocols, according to Micron Technology, one of the key designers of GDDR5X. To improve the performance of GDDR5 memory further, engineers had to change the internal architecture of the memory chips significantly.

The key improvement of the GDDR5X standard over its predecessor is its all-new 16n prefetch architecture, which enables up to 512 bits (64 bytes) per array read or write access. By contrast, GDDR5 features an 8n prefetch architecture and can read or write up to 256 bits (32 bytes) of data per access. The doubled prefetch and increased data transfer rates are expected to double the effective memory bandwidth of GDDR5X sub-systems. However, the performance of actual graphics cards will depend not just on DRAM architecture and frequencies, but also on memory controllers and applications, so real hardware will need to be tested to establish the real-world benefits of the new memory.
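
As a quick back-of-the-envelope sketch (assuming the usual 32-bit-wide per-chip I/O interface, which is not restated in the paragraph above), the access size follows directly from the prefetch depth:

```python
# Rough sketch: bits fetched from the DRAM array per read/write access,
# assuming a 32-bit-wide chip I/O interface (the usual GDDR5/GDDR5X width).
IO_WIDTH_BITS = 32

def access_size_bits(prefetch_n: int, io_width: int = IO_WIDTH_BITS) -> int:
    """Prefetch depth (8n, 16n, ...) times interface width."""
    return prefetch_n * io_width

print(access_size_bits(8))   # GDDR5:  8n  -> 256 bits (32 bytes) per access
print(access_size_bits(16))  # GDDR5X: 16n -> 512 bits (64 bytes) per access
```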

Just like its predecessor, GDDR5X uses two different clock types: a differential command clock (CK) to which address and command inputs are referenced, and a forwarded differential write clock (WCK) to which read and write data are referenced. WCK runs at twice the frequency of CK. Data can be transmitted at double data rate (DDR) or quad data rate (QDR) relative to WCK, depending on whether the 8n-prefetch or 16n-prefetch architecture and protocols are used. Accordingly, if chip makers manage to raise the CK clock to 1.5 GHz, the data rate in QDR/16n mode rises to 12 Gbps.
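
To make the clock relationships concrete, here is a minimal sketch of the per-pin data-rate arithmetic described above (WCK at twice CK, with DDR or QDR signalling on WCK):

```python
# Per-pin data rate as a function of the command clock (CK), assuming
# WCK runs at twice CK and data is transferred at DDR or QDR on WCK.
def data_rate_gbps(ck_ghz: float, mode: str) -> float:
    wck_ghz = 2 * ck_ghz                      # forwarded write clock
    transfers = {"DDR": 2, "QDR": 4}[mode]    # transfers per WCK cycle
    return wck_ghz * transfers                # Gbit/s per data pin

print(data_rate_gbps(1.5, "DDR"))  # 8n-prefetch (GDDR5-style):  6 Gbps per pin
print(data_rate_gbps(1.5, "QDR"))  # 16n-prefetch GDDR5X mode:  12 Gbps per pin
```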

Since the GDDR5X protocol and interface training sequence are similar to those of GDDR5, it should be relatively easy for chip developers to adapt their memory controllers to the new type of memory. However, since the QDR mode (called Ultra High Speed mode in Micron's materials) mandates the use of PLLs/DLLs (phase-locked loops, delay-locked loops), there will be certain changes to the design of high-end memory chips.

JEDEC's GDDR5X SGRAM announcement discusses data rates from 10 to 14 Gbps, but Micron believes they could eventually be increased to 16 Gbps. It is hard to say whether commercial chips will actually hit such data rates, keeping in mind that new types of memory are on the way. However, even a 256-bit GDDR5X memory sub-system running at 14 Gbps could provide up to 448 GB/s of memory bandwidth, just 12.5% lower than that of AMD's Radeon R9 Fury X (which uses first-generation HBM).

GPU Memory Math

| | AMD Radeon R9 290X | NVIDIA GeForce GTX 980 Ti | NVIDIA GeForce GTX 960 | AMD Radeon R9 Fury X | Samsung's 4-Stack HBM2 based on 8 Gb DRAM | Theoretical GDDR5X 256-bit sub-system | Theoretical GDDR5X 128-bit sub-system |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Total Capacity | 4 GB | 6 GB | 2 GB | 4 GB | 16 GB | 8 GB | 4 GB |
| B/W Per Pin | 5 Gb/s | 7 Gb/s | 7 Gb/s | 1 Gb/s | 2 Gb/s | 14 Gb/s | 14 Gb/s |
| Chip Capacity | 2 Gb | 4 Gb | 4 Gb | 1 GB | 4 GB | 1 GB (8 Gb) | 1 GB (8 Gb) |
| No. Chips/Stacks | 16 | 12 | 4 | 4 | 4 | 8 | 4 |
| B/W Per Chip/Stack | 20 GB/s | 28 GB/s | 28 GB/s | 128 GB/s | 256 GB/s | 56 GB/s | 56 GB/s |
| Bus Width | 512-bit | 384-bit | 128-bit | 4096-bit | 4096-bit | 256-bit | 128-bit |
| Total B/W | 320 GB/s | 336 GB/s | 112 GB/s | 512 GB/s | 1 TB/s | 448 GB/s | 224 GB/s |
| Estimated DRAM Power Consumption | 30 W | 31.5 W | 10 W | 14.6 W | n/a | 20 W | 10 W |
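
The Total B/W row follows directly from the per-pin data rate and the bus width; here is a small sanity-check sketch using the figures from the table:

```python
# Total memory bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8.
def total_bw_gbytes(per_pin_gbps: float, bus_width_bits: int) -> float:
    return per_pin_gbps * bus_width_bits / 8

print(total_bw_gbytes(5, 512))    # R9 290X:            320.0 GB/s
print(total_bw_gbytes(7, 384))    # GTX 980 Ti:         336.0 GB/s
print(total_bw_gbytes(1, 4096))   # R9 Fury X (HBM1):   512.0 GB/s
print(total_bw_gbytes(14, 256))   # GDDR5X 256-bit:     448.0 GB/s
print(total_bw_gbytes(14, 128))   # GDDR5X 128-bit:     224.0 GB/s
```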

Capacity Improvements

Performance was not the only thing the developers of GDDR5X had to address. Many applications require not only high-performance memory, but a lot of high-performance memory. Increased capacities of GDDR5X chips will enable their adoption by a broader set of devices beyond graphics/compute cards, game consoles and network equipment. Initially, one would expect the high-density configurations to be somewhat conservative on frequency.

The GDDR5 standard covered memory chips with 512 Mb, 1 Gb, 2 Gb, 4 Gb and 8 Gb capacities. The GDDR5X standard defines devices with 4 Gb, 6 Gb, 8 Gb, 12 Gb and 16 Gb capacities. Typically, the mainstream DRAM industry doubles chip capacities for economic and technological reasons. With GDDR5X, however, the industry decided to ratify SGRAM configurations with rather unusual capacities: 6 Gb and 12 Gb.

The mobile industry already uses LPDDR devices with 3 Gb, 6 Gb and 12 Gb capacities in a bid to maximize flexibility of memory configurations for portable electronics. As it appears, companies developing standards for graphics DRAM also wanted to capitalize on flexibility. A GDDR5X chip with 16 Gb capacity made using 20 nm or 16/18 nm process technology would have a rather large die size and thus high cost. However, the size and cost of a 12 Gb DRAM IC should be considerably lower and such a chip could arguably address broader market segments purely on cost.

Just like GDDR5, the GDDR5X standard fully supports clamshell mode, which allows two 32-bit memory chips to be driven by one 32-bit memory controller by sharing the address and command bus while reducing the number of data I/Os per DRAM IC to 16. Such operation has no impact on system bandwidth, but it doubles the number of memory components per channel. For example, it should be theoretically possible to build a graphics card with 64 GB of GDDR5X using one GPU with a 512-bit memory bus and 32 16-Gb GDDR5X memory chips.
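
A minimal sketch of that 64 GB figure (assuming 32-bit channels, with each chip dropping to a x16 interface in clamshell mode as described above):

```python
# Clamshell mode: two x16-wide devices share one 32-bit channel, so the
# number of chips per board doubles while the bus width stays the same.
def max_capacity_gbytes(bus_width_bits: int, chip_density_gbit: int,
                        clamshell: bool) -> float:
    channels = bus_width_bits // 32            # 32-bit memory channels
    chips = channels * (2 if clamshell else 1) # two chips per channel in clamshell
    return chips * chip_density_gbit / 8       # Gbit -> GB

print(max_capacity_gbytes(512, 16, clamshell=False))  # 16 chips -> 32.0 GB
print(max_capacity_gbytes(512, 16, clamshell=True))   # 32 chips -> 64.0 GB
```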

Unusual capacities will help GDDR5X better address all market segments, including graphics cards, HPC (high-performance computing), game consoles, network equipment and so on. However, it should be noted that GDDR5X has an extremely potent rival in second-generation HBM, which offers a number of advantages, especially in the high-end segment of the graphics and HPC markets.

Energy Efficiency

Power consumption and heat dissipation are two major limiting factors of compute performance nowadays. When developing the GDDR5X standard, the industry implemented a number of ways to keep power consumption of the new graphics DRAM in check.

Supply voltage and I/O voltages of the GDDR5X were decreased from 1.5 V on today's high-end GDDR5 memory devices to 1.35 V. Reduction of Vdd and Vddq should help to cut power consumption of the new memory by up to 10%, which is important for high-performance and mobile devices where the memory can take a sizable chunk of the available power budget.
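
As a rough first-order model (an illustration, not a figure from the standard), the switching portion of a chip's power scales roughly with the square of the voltage; since only part of a DRAM's power scales this way, the chip-level saving quoted above is smaller than the raw ratio suggests:

```python
# First-order model only: dynamic (switching) power scales with V^2.
# Only part of a DRAM chip's power is voltage-scaled, hence the more
# modest "up to 10%" chip-level figure quoted above.
v_gddr5, v_gddr5x = 1.5, 1.35
scaling = (v_gddr5x / v_gddr5) ** 2
print(f"voltage-scaled portion shrinks to {scaling:.2f}x "
      f"({(1 - scaling) * 100:.0f}% lower)")   # -> 0.81x, roughly 19% lower
```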

The reduction of supply and I/O voltages is not the only measure taken to cut the power consumption of the new memory. The GDDR5X standard makes a temperature-sensor-controlled refresh rate a compulsory feature of the technology, which could help optimize power consumption in certain scenarios. Moreover, there are a number of other features and commands, such as per-bank self refresh, hibernate self refresh and partial array self refresh, that were designed to shrink the energy consumption of the new SGRAM.

Due to lower voltages and a set of new features, power consumption of a GDDR5X chip should be lower compared to that of a GDDR5 chip at the same clock-rates. However, if we talk about target data rates of the GDDR5X, then power consumption of the new memory should be similar or slightly higher than that of GDDR5, according to Micron. The company says that GDDR5X’s power consumption is 2-2.5W per DRAM component and 10-30W per board. Even with similar/slightly higher power consumption compared to the GDDR5, the GDDR5X is being listed as considerably more energy efficient due to its improved theoretical performance.
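
Using the board-level estimates from the GPU Memory Math table above, a rough energy-per-bit comparison illustrates the point (illustrative arithmetic only; real figures depend on workload and controller behaviour):

```python
# Rough energy per transferred bit = estimated DRAM power / total bandwidth,
# using the board-level figures from the GPU Memory Math table above.
def picojoules_per_bit(power_w: float, bandwidth_gbytes: float) -> float:
    bits_per_second = bandwidth_gbytes * 8e9
    return power_w / bits_per_second * 1e12

print(picojoules_per_bit(30.0, 320))   # GDDR5, R9 290X:     ~11.7 pJ/bit
print(picojoules_per_bit(31.5, 336))   # GDDR5, GTX 980 Ti:  ~11.7 pJ/bit
print(picojoules_per_bit(20.0, 448))   # GDDR5X 256-bit:      ~5.6 pJ/bit
```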

We do not know the specifications of next-generation graphics adapters (for desktops and laptops) from AMD and NVIDIA, but if GPU and DRAM developers can actually hit 14 Gb/s data rates with GDDR5X memory, they will double the bandwidth available to graphics processors versus GDDR5 without significantly increasing power consumption. Eventually, the higher data rates and unusual capacities of GDDR5X could even help decrease the power consumption of certain memory sub-systems.

Implementation

While a GDDR5X chip is internally different from a GDDR5 chip, the industry's transition to GDDR5X is a less radical step than the upcoming transition to HBM (high-bandwidth memory) DRAM. In fact, even the transition from GDDR3/GDDR4 to GDDR5 years ago was considerably harder than the transition to GDDR5X is going to be in the coming years.

GDDR5X-compliant memory chips will come in a 190-ball grid array package (compared to the 170-ball packaging used for current GDDR5); thus, they will not be pin-to-pin compatible with existing GDDR5 ICs or the PCBs of modern graphics cards. But while GDDR5X will require the development of new PCBs and upgrades to memory controllers, everything else works exactly as it does with GDDR5: the interface signal training features and sequences are the same, error detection is similar, the protocols have a lot in common, and even the existing GDDR5 low- and high-speed modes are supported to enable mainstream and low-power applications. BGA packages are inexpensive, and they need neither the silicon interposers nor the die-stacking techniques that HBM requires.

Implementation of GDDR5X should not be too expensive both from R&D and production perspectives; at least, this is something that Micron implied several months ago when it revealed the first details about the technology.

Industry Support

GDDR5X is a JEDEC standard supported by its members. The JEDEC document covering the technology contains vendor IDs for three major DRAM manufacturers: Micron, Samsung and SK Hynix. Identification of the memory producers is needed for controllers to differentiate between various vendors and devices, and listing the memory makers demonstrates that they participated in development, considered features and balloted on them at JEDEC's meetings, which may indicate their interest in supporting the technology. Unfortunately, the exact plans of each company regarding GDDR5X production are unknown, though we would expect GDDR5X parts to fit between the current GDDR5 high end and anything implementing HBM, or to enable higher memory capacities on lower-end GPUs. Micron plans to start mass production of its GDDR5X memory chips in mid-2016, so we might see actual GDDR5X-based memory sub-systems in less than six months from now.

NVIDIA, currently the world's largest supplier of discrete graphics processors, said that as a member of JEDEC it participates in the development of industry standards like GDDR5X. AMD is also a member of JEDEC and usually plays a key role in the development of memory standards. Both companies also employ compression algorithms to alleviate the stress on texture transfers between the GPU and memory, so an increase in bandwidth (as shown by Fiji) plus an increase in density can bring benefits in texture-rich or memory-bound compute scenarios.

While the specific plans of various companies regarding GDDR5X are unclear, the technology has great potential if the numbers are accurate (and, being a standard, they should be) and has every chance of being adopted by the industry. The main rival of GDDR5X, second-generation HBM, can offer higher bandwidth, lower power consumption and smaller form factors, but at the cost of design and manufacturing complexity. What remains to be seen is whether HBM and GDDR5X will actually compete directly against each other or simply become two complementary types of memory. Different applications have different requirements: an HBM memory sub-system with 1 TB/s of bandwidth makes perfect sense for a high-end graphics adapter, while mainstream video cards should work perfectly well with GDDR5X, and chances are we will see both in play at different points in the market.

Samsung GDDR6 Memory Wins CES 2018 Innovation Award

11/13/2017 [07:18], Ivan Grudtsyn

The limited bandwidth of GDDR5 graphics memory led to the appearance of video cards and HPC accelerators with HBM (HBM1) and, later, HBM2 memory. Connecting the graphics core and the High Bandwidth Memory dies through an intermediate silicon layer made it possible both to increase the bandwidth of the memory subsystem and to significantly reduce the area occupied by the key elements of the video card. At the same time, solutions with HBM/HBM2 turned out to have a number of drawbacks: high cost and, as a result, limited memory capacity; virtually no possibility of swapping VRAM chips within one GPU generation (again, due to additional costs); and a strong dependence on contractors. All this led to the parallel release of high-end video cards with GDDR5, GDDR5X (hardly a full-fledged replacement for GDDR5) and HBM2 memory.

In the first half of next year, a new generation of chips, GDDR6, will come to the aid of the insufficiently fast GDDR5 and the more expensive GDDR5X. These chips are already being produced by Samsung and SK Hynix, and mass deliveries should begin in the coming months. Nevertheless, GDDR6 will appear in production video cards only in the spring, with the first announcements of NVIDIA GeForce and/or TITAN adapters based on 12-nm Volta chips. Samsung Electronics has previously announced plans to release GDDR6 chips with data rates of 14 to 16 Gb/s per pin (against a maximum of 9 Gb/s for GDDR5 and 10-11.4 Gb/s for GDDR5X), and SK Hynix has announced 8 Gb (1 GB) GDDR6 chips with a limit of 16 Gb/s per pin.

The press-release war between the South Korean VRAM manufacturers continues. Among the many Samsung products that won a CES 2018 Innovation Award, there was also a place for a 16-Gb (2-GB) GDDR6 chip.

"Samsung 16 Gb GDDR6 is the fastest and most cost-effective DRAM for next-generation graphics products. It processes images and video at a throughput of 16 Gb/s per pin and a total memory bandwidth of 64 GB/s, which is equivalent to transferring twelve Full HD video DVDs per second. The new DRAM can operate at 1.35 V, which provides an additional advantage over today's graphics memory, which requires 1.5 V at only 8 Gb/s per pin."

As VideoCardz found out, the description above corresponds to a Samsung GDDR6 product marked K4ZAF325BM. Eight of these chips (128 Gb, or 16 GB, in total) will provide a total throughput of 512 GB/s on a 256-bit memory bus, and twelve chips will provide 768 GB/s on a 384-bit bus. For comparison, the HBM2 memory of the Radeon RX Vega 64 video card (and a number of non-gaming adapters) offers a total bandwidth of 484 GB/s, while the Tesla V100 HPC accelerator reaches 900 GB/s. Of course, the development of High Bandwidth Memory does not stand still either, but a hypothetical HBM3 will clearly arrive later than GDDR6.
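
These throughput figures follow from the same per-pin arithmetic used earlier in this article; a short sketch (assuming a 32-bit interface per chip and 16 Gb/s per pin, as described for the K4ZAF325BM parts):

```python
# GDDR6 per-chip and per-board bandwidth, assuming 16 Gb/s per pin and
# a 32-bit interface per chip (as described for the 16 Gb Samsung parts).
PER_PIN_GBPS = 16
CHIP_IO_BITS = 32

per_chip_gbytes = PER_PIN_GBPS * CHIP_IO_BITS / 8
print(per_chip_gbytes)        # 64 GB/s per chip

print(per_chip_gbytes * 8)    # 8 chips, 256-bit bus:   512 GB/s
print(per_chip_gbytes * 12)   # 12 chips, 384-bit bus:  768 GB/s
```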


GDDR6, GDDR5X and HBM2: comparing current video memory

In addition to a fast graphics processor, the amount of video memory and its speed are also important for games. Giant textures and data for computation must reach the compute units quickly; otherwise "traffic jams" form and the frame rate on the monitor drops. Particularly high resolutions (greater than 1080p) combined with computationally demanding anti-aliasing quickly exhaust the memory resources of mid-range graphics cards. Current graphics accelerators usually use GDDR5 chips, while more expensive NVIDIA models use faster GDDR5X memory.

Meanwhile, AMD’s forthcoming Vega accelerators will use high-speed second-generation High-Bandwidth Memory (HBM2). So what are the main features of these types of memory?

Current graphics memory in comparison

| | GDDR5 | GDDR5X | GDDR6 | HBM2 |
| --- | --- | --- | --- | --- |
| Data rate per pin | up to 8 Gbps | up to 12 Gbps | up to 16 Gbps | 1.6 to 2 Gbps |
| Bus width | 512-bit | 512-bit | 512-bit | 4096-bit |
| Voltage | 1.5 V | 1.35 V | 1.35 V | 1.2 V |
| Memory chip density | up to 8 Gbit | up to 16 Gbit | up to 8 Gbit | up to 16 Gbit |
| Memory capacity | 12 GB | 12 GB | 12 GB | 16 GB |
| Total bandwidth | 512 GB/s | 768 GB/s | 1024 GB/s | 1024 GB/s |

The table above shows the main characteristics of the four types of memory under consideration. According to the specification, GDDR5X supports up to 12 Gb/s per pin and a 512-bit bus width. In a final product, that could mean a data transfer rate of 768 GB/s. In actual graphics cards, however, such dream figures are not reached.

The fastest GDDR5X on the market runs at 11 Gbps. And although NVIDIA's Titan Xp offers 12 GB of video memory, it uses "only" a 384-bit rather than a 512-bit memory bus.

SK Hynix shows GDDR6 data sheet at GTC 2017. (Photo: Heise)

GDDR6 vs. HBM2

The GDDR6 data that SK Hynix unveiled at GTC 2017 is especially interesting to compare with the other new graphics memory of the year, HBM2. At best, both types of memory provide roughly the same bandwidth. The differences show up in capacity. AMD has announced a "not for gamers" card with 16 GB of memory that is due to enter the market at the end of June 2017. Since the configuration of GDDR6 chips must remain identical to that of GDDR5, graphics card manufacturers will most likely not be able to break the 12 GB threshold on a single card.
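
The 12 GB ceiling mentioned above is straightforward arithmetic if one assumes 8 Gb GDDR6 chips on a 384-bit bus without clamshell doubling (an illustrative assumption, not a figure stated in the source):

```python
# Capacity ceiling with 8 Gb chips on a 384-bit bus (no clamshell doubling):
bus_width_bits = 384
chip_density_gbit = 8

chips = bus_width_bits // 32                 # one x32 chip per 32-bit channel
capacity_gbytes = chips * chip_density_gbit / 8
print(chips, capacity_gbytes)                # 12 chips -> 12.0 GB
```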

HBM2 will be limited to 16 GB for a while, but this standard has room to raise the bar higher, quite literally: the 3D memory is currently used only in a 4 GB-per-stack version ("4-Hi stack"), yet the standard allows stacking up to 8 GB per stack.