
 NVIDIA GeForce Community V16 (welcum pascal), ALL HAIL NEW PASCAL KING GTX1080 out now

Topet
post Jul 13 2016, 08:49 PM

schadenfreude
*****
Senior Member
994 posts

Joined: Jun 2009
From: Sabah


QUOTE(Someonesim @ Jul 13 2016, 08:44 PM)
A bit sad when I found out our 1070 GameRock is not the PE one. The PE is OCed an additional 1XX on the core and 2XX on the memory, but for the price I'm happy.
*
Unfortunately they didn't bring in the PE version. But as you said, for the price, who cares, right? Can't wait. Luckily I didn't buy an RX 480.

This post has been edited by Topet: Jul 13 2016, 08:50 PM
Someonesim
post Jul 13 2016, 10:20 PM

In my way
*******
Senior Member
9,132 posts

Joined: Aug 2005



QUOTE(Topet @ Jul 13 2016, 08:49 PM)
Unfortunately they didn't bring in the PE version. But as you said, for the price, who cares, right? Can't wait. Luckily I didn't buy an RX 480.
*
Hopefully it can at least OC to the PE's level; given that they come with the same cooler, it should have some OC headroom. I only hope the size fits my tiny case.

If the RX 480 were nearer to US pricing I would consider it, but at local pricing it's a big no-no.
Topet
post Jul 13 2016, 10:36 PM

schadenfreude
*****
Senior Member
994 posts

Joined: Jun 2009
From: Sabah


QUOTE(Someonesim @ Jul 13 2016, 10:20 PM)
Hopefully it can at least OC to the PE's level; given that they come with the same cooler, it should have some OC headroom. I only hope the size fits my tiny case.

If the RX 480 were nearer to US pricing I would consider it, but at local pricing it's a big no-no.
*
1.4k for the RX 480 and 1.8k for the GTX 1070. I don't think it's really long, just thicker with the 2.5 slots it uses.
Moogle Stiltzkin
post Jul 13 2016, 11:11 PM

Look at all my stars!!
*******
Senior Member
4,475 posts

Joined: Jan 2003
Anyone read this yet?

QUOTE
id Software: Use TSSAA or No AA Or Else Async Compute Is Disabled

Tiago Sousa, Lead Renderer Programmer at id Software, wants to let everyone benchmarking DOOM know that they should use TSSAA or no AA or else Async Compute is disabled. He goes on to say that Async Compute for other AA modes will be addressed in a later update.



Is TSSAA any good or rubbish?
riderz135
post Jul 13 2016, 11:16 PM

Getting Started
**
Junior Member
150 posts

Joined: Jan 2011

Zotac GTX 1070 (ZOTAC's reference-board edition) spotted!


kevyeoh
post Jul 13 2016, 11:21 PM

Look at all my stars!!
*******
Senior Member
4,721 posts

Joined: Jan 2003


Hi, may I know where to get the 1070 @ RM1.8k? Is it via GemFive? But that one is limited and already sold out. Wondering if we can get it from a normal shop at such a price.

Thanks!

QUOTE(Topet @ Jul 13 2016, 10:36 PM)
1.4k for the RX 480 and 1.8k for the GTX 1070. I don't think it's really long, just thicker with the 2.5 slots it uses.
*
adilz
post Jul 13 2016, 11:21 PM

Getting Started
**
Junior Member
267 posts

Joined: Oct 2007
From: Kuala Lumpur, Malaysia


QUOTE(svfn @ Jul 13 2016, 05:08 PM)
Got these from the other thread; I am trying to understand both statements. Would appreciate some help.

I guess what they mean is that Nvidia only focuses the latest game optimizations on the latest gen like Pascal, so they are less focused on optimizations for previous gens, whereas AMD's GCN architecture can improve some older-gen cards' performance via drivers, so those cards are not so left out. Some people still on a 280X see a nice performance boost in DOOM Vulkan, for example; for a rather old card it's still cheaper and better than a GTX 960 in many games. Can't say the same for the GTX 770.

Multitasking? I think this is referring to the CPU instead?
*

I wouldn't say the AMD improvement we've seen with the recent events (the DX12 patch for RotTR or the Vulkan patch for DOOM) is down to what AMD is doing with its driver. Some say it's because the game is optimized for that particular hardware, some say it's because of bad driver implementation. For me, and I stand by this, it's down to the fact that the API shift AMD has been banking on since they created GCN has finally come. If we are to discuss this, it's hard to keep everyone on the same page without first fundamentally understanding where the key components that make a game display on your screen sit. So I'm going to go long on this one, once and for all, to clear the air.

[image: application → API → driver → GPU layering diagram]

My simplistic explanation: the application (game engine) talks to the API layer (the DirectX, Vulkan or OpenGL runtime). For the GPU hardware to talk to the API, the GPU vendor creates its specific driver. The API itself has many functions and features. Certain API functions and features must be implemented by the game developer to ensure their game runs across all devices that support that API (this is key to why APIs like DirectX exist in the first place). And certain API functions and features are available for the game developer and the GPU manufacturer (through their respective driver implementations) to use and implement; that's where those GPU-brand-specific optimizations come about. The API is needed for today's games to work (those who played games in MS-DOS before the invention of DirectX, please raise your hands); there are no two ways about it. If someone tells me an API is not needed, please uninstall all traces of the DirectX runtime from your Windows PC and tell me your games still work.
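
To make that layering concrete, here is a minimal C++ sketch against the Vulkan API (illustrative only, most error handling omitted): the application only ever calls the API runtime, and the Vulkan loader dispatches each call to whichever vendor driver (ICD) is installed.

CODE
// The app talks to the Vulkan API; the loader routes each call to the
// GPU vendor's installable client driver (ICD). Error handling trimmed.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "layering-demo";
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    // Every physical device listed here is surfaced by a vendor driver.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        std::printf("GPU: %s\n", props.deviceName);
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}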

Now let's first discuss this AMD/GCN/async compute thing. AMD has had GCN built into its GPUs since 2011, and at that time the de facto API for most games was DX11. Within GCN, AMD had hardware-based Asynchronous Compute Engines (ACEs) that can use the asynchronous compute concept to improve game performance. But the DX11 API does not make use of async compute. So there was AMD with those engines sitting in the GPU, taking up transistors and die space, but pretty much useless. AMD believed it was good technology and kept upgrading GCN throughout DX11's reign, despite it basically not improving their cards' performance in any way; they were banking on some future API making use of async compute. They even created the Mantle API to showcase the async compute capability, but who uses Mantle? Still, game developers know async compute is a good thing, Microsoft knows it's a good thing, Khronos knows it's a good thing. Only now, with the latest DX12 and Vulkan developments, has async compute become a feature available in a mainstream API.

And that feature is NOT proprietary to AMD (although AMD had a hand in pushing it to be accepted as one of the API features of DX12 and Vulkan). It exists in the DX12 and Vulkan APIs, and any game developer and GPU manufacturer (via their driver implementation) can take advantage of it if they want to. And I really mean ANYBODY. I would also stress that DX12 is not all about async compute; it's just one available feature among many others. The fundamental point of DX12's existence is the move to low-level API programming, which is more efficient than the DX11 API. Refer to the pic below.

[attached image: DX9 vs DX11 vs DX12 comparison]
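
And to see how vendor-neutral it is in practice: in Vulkan, async compute is simply exposed as an extra queue family. Here is a minimal C++ sketch (illustrative; `gpu` is a VkPhysicalDevice obtained as in the earlier sketch) that picks a compute-only family when the driver offers one:

CODE
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Prefer a dedicated compute-only family (AMD exposes these); otherwise
// fall back to any compute-capable family; UINT32_MAX if none exists.
uint32_t pickComputeFamily(VkPhysicalDevice gpu) {
    uint32_t n = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &n, nullptr);
    std::vector<VkQueueFamilyProperties> fams(n);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &n, fams.data());

    uint32_t anyCompute = UINT32_MAX;
    for (uint32_t i = 0; i < n; ++i) {
        bool compute  = (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
        bool graphics = (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        if (compute && !graphics) return i; // dedicated compute family
        if (compute) anyCompute = i;        // graphics+compute family
    }
    return anyCompute;
}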

In relation to the recent events with the RotTR DX12 patch and the DOOM Vulkan patch, saying it's driver related is not quite right. AMD's latest driver was already in use (and I'm sure async compute support already existed within that driver) before those two patches came out. But that async compute feature was not utilized until a few days ago, when those two patches came out for DX12 and Vulkan; finally those two games could make use of AMD's async compute, and BOOM, improvement on some AMD cards with GCN. DX12 and Vulkan as a whole do not improve AMD cards' performance; utilization of specific features within the respective API does (in this case, the API's async compute support). So generalizing that DX12 or Vulkan will improve AMD cards' performance is wrong. It totally depends on the game developer's usage and implementation of DX12 and Vulkan API features; the potential for AMD cards is there if the game developers decide to use them. It's not so much within AMD's control; it's in the hands of the developers. And if any of you read through some of those DirectX 12 articles, one of the incentives of DX12 is that the game developer gets more control over how their game engine interacts with the hardware.

So this is my take on AMD / DX12 / Vulkan. As an owner of Nvidia's GTX 1080, I also have my opinion on Nvidia in relation to the new APIs and DX11. And that opinion is among the reasons I sold off my GTX 970 just six months after buying it and went for a GTX 1080 recently. Maybe I'll have my say on Nvidia some other time.

By the way, anyone who likes techie stuff can watch this nice video, which I think explains things better than my post: the history, and what to expect in the future from these new APIs.


And those who are interested in the RotTR DX12 analysis can head to minute 12:25.

svfn kianweic

This post has been edited by adilz: Jul 14 2016, 10:34 AM
Topet
post Jul 13 2016, 11:22 PM

schadenfreude
*****
Senior Member
994 posts

Joined: Jun 2009
From: Sabah


QUOTE(kevyeoh @ Jul 13 2016, 11:21 PM)
Hi, may I know where to get the 1070 @ RM1.8k? Is it via GemFive? But that one is limited and already sold out. Wondering if we can get it from a normal shop at such a price.

Thanks!
*
Only at GemFive with the voucher. You can always monitor GemFive; sometimes the seller restocks the card.
riderz135
post Jul 13 2016, 11:23 PM

Getting Started
**
Junior Member
150 posts

Joined: Jan 2011
QUOTE(kevyeoh @ Jul 13 2016, 11:21 PM)
Hi, may I know where to get the 1070 @ RM1.8k? Is it via GemFive? But that one is limited and already sold out. Wondering if we can get it from a normal shop at such a price.

Thanks!
*
Try checking again the next day, bro. They usually restock fast. I've visited several times and seen the card available to purchase.
svfn
post Jul 13 2016, 11:47 PM

On my way
****
Junior Member
500 posts

Joined: Oct 2015
From: Penang
QUOTE(adilz @ Jul 13 2016, 11:21 PM)
So this is my take on AMD / DX12 / Vulkan. As an owner of Nvidia's GTX 1080, I also have my opinion on Nvidia in relation to the new APIs and DX11. And that opinion is among the reasons I sold off my GTX 970 just six months after buying it and went for a GTX 1080 recently. Maybe I'll have my say on Nvidia some other time.

Yes, AMD has fewer resources to put into their drivers compared to Nvidia's excellent driver team; you can see their performance on DX11/OpenGL is not as good as Nvidia's. But with the newer APIs the optimization falls more into the developers' hands. There will be bad DX12 implementations as well, since there isn't a true DX12 game built from the ground up yet.

So in other words, you could say AMD has hardware that was underutilized and is being utilized now with the new APIs.
AMD designed their hardware for the new APIs and pushed to advance the API environment, while Nvidia designed theirs for the now; it's just two different approaches.

Pascal will have support for async on the driver side, so owners don't really have to worry much. For Maxwell I just can't say the same. We also don't know if async will even be widely used in the future.

Here's a post about it from the Nvidia reddit:
QUOTE
They don't have dedicated async shaders, period. Async compute (the ability to decouple shading from rendering tasks) is supported, even in Maxwell. Maxwell: each GPC can work on different tasks, independent of other GPCs. Pascal: each SM (SMs are contained in a GPC) can work on different tasks, independent of other SMs. These tasks can be a mix of render and compute and the scheduler dynamically assigns tasks.

AMD has dedicated async shaders where they can offload compute, completely independent of the render pipeline.

It's two different approaches to accomplishing the same thing (decouple compute from render, to better utilize the GPU). Async compute itself is NOT a magic bullet, and must be done properly to avoid stalls; it can have disastrous effects if done poorly. It's a tool to offload compute tasks to reduce frame times, and like any other tool it can be a detriment if done wrong.

AMD couldn't use the ACEs in DX11, and their compute pipeline suffered because of it (a good chunk of hardware sitting unused). In Vulkan / DX12, they can now and get good performance gains from better GPU utilization. It's basically as simple as saying "throw into compute queue" and done.

Maxwell and Pascal are already near fully utilized. Forcing it on in Maxwell causes inefficiencies (performance loss) since the software scheduler is already dynamically assigning tasks and now you're overriding what it does. Pascal is better, but mostly just because it can be forced to reassign tasks in a more granular manner (by SM, not by entire GPC).

So yes, Nvidia can "do" async compute, no they don't have async shaders, no there is no performance gain on Maxwell (sometimes a detriment), yes Pascal can see minor gains.
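
For what "throw into the compute queue" looks like from the application side, here is a minimal Vulkan sketch (illustrative; `cmdBuf`, `pipeline` and `computeQueue` are assumed to be created elsewhere, and the semaphores/fences a real renderer would need are omitted):

CODE
// Record a compute dispatch and submit it on its own queue, independent
// of the graphics queue. On hardware with a free compute queue this can
// overlap with rendering work already in flight.
VkCommandBufferBeginInfo begin{};
begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
vkBeginCommandBuffer(cmdBuf, &begin);
vkCmdBindPipeline(cmdBuf, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
vkCmdDispatch(cmdBuf, 256, 1, 1); // 256 workgroups, for example
vkEndCommandBuffer(cmdBuf);

VkSubmitInfo submit{};
submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submit.commandBufferCount = 1;
submit.pCommandBuffers = &cmdBuf;
vkQueueSubmit(computeQueue, 1, &submit, VK_NULL_HANDLE);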


This post has been edited by svfn: Jul 14 2016, 12:23 AM
Demonic Wrath
post Jul 13 2016, 11:51 PM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

QUOTE(adilz @ Jul 13 2016, 11:21 PM)
*
Your DirectX photo (comparing DX9, DX11, DX12) shows the ideal usage case. In DX11, the AMD driver doesn't support deferred contexts, so the GPU gets underutilized because the immediate context doesn't have enough work to submit to the GPU.



The NVIDIA driver has supported it for DX11 since driver 337.50 (the magical driver against Mantle). Same case with AMD's OpenGL driver.

See this latest Digital Foundry video (at 4:39). See that massive CPU frame-time decrease for the AMD Fury X when going from OpenGL to Vulkan?


In DX11, deferred context support is optional.

What happens on DX12/Vulkan is that work submission is handled by the app now, so the task goes to the programmer. The IHV only needs a driver that recognizes that data and those queues.

Also, DOOM's Vulkan path uses some AMD-specific shader extensions to improve performance too. The NV Vulkan driver doesn't support a lot of shader extensions and also doesn't have a compute-only queue. Their Vulkan driver is at a very immature stage.
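
In practice, "uses an AMD shader extension" means the app asks the driver whether it advertises the extension and enables it at device creation, roughly like the sketch below (the extension names are real Vulkan extensions; whether DOOM uses exactly these ones is an assumption):

CODE
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// True if the physical device's driver advertises the named extension.
bool hasExtension(VkPhysicalDevice gpu, const char* name) {
    uint32_t n = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &n, nullptr);
    std::vector<VkExtensionProperties> exts(n);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &n, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0) return true;
    return false;
}

// Usage: take the fast path only when the driver offers it, e.g.
//   if (hasExtension(gpu, "VK_AMD_shader_ballot"))
//       enabled.push_back("VK_AMD_shader_ballot");
//   if (hasExtension(gpu, "VK_AMD_gcn_shader"))
//       enabled.push_back("VK_AMD_gcn_shader");
// then pass `enabled` via VkDeviceCreateInfo::ppEnabledExtensionNames.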
x5_416
post Jul 13 2016, 11:55 PM

Getting Started
**
Junior Member
111 posts

Joined: Mar 2006
From: Season 1


QUOTE(kevyeoh @ Jul 13 2016, 11:21 PM)
Hi, may I know where to get the 1070 @ RM1.8k? Is it via GemFive? But that one is limited and already sold out. Wondering if we can get it from a normal shop at such a price.

Thanks!
*
The promotion is until 18th July. I also quickly ordered one as soon as I saw them restock.
svfn
post Jul 13 2016, 11:56 PM

On my way
****
Junior Member
500 posts

Joined: Oct 2015
From: Penang
QUOTE(Demonic Wrath @ Jul 13 2016, 11:51 PM)
What happens on DX12/Vulkan is that work submission is handled by the app now, so the task goes to the programmer. The IHV only needs a driver that recognizes that data and those queues.

Also, DOOM's Vulkan path uses some AMD-specific shader extensions to improve performance too. The NV Vulkan driver doesn't support a lot of shader extensions and also doesn't have a compute-only queue. Their Vulkan driver is at a very immature stage.
*
must have faith in NV driver team!
SUSg104
post Jul 14 2016, 12:13 AM

On my way
****
Senior Member
657 posts

Joined: Mar 2016
adilz, FYI: AdoredTV is an AMD fanboy. This fella always says how great AMD is, cos NV doesn't even care about it.
Demonic Wrath
post Jul 14 2016, 12:39 AM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

QUOTE(svfn @ Jul 13 2016, 11:47 PM)
Yes, AMD has fewer resources to put into their drivers compared to Nvidia's excellent driver team; you can see their performance on DX11/OpenGL is not as good as Nvidia's. With the newer APIs the optimization falls more into the developers' hands. There will be bad DX12 implementations as well, since there isn't a true DX12 game built from the ground up yet. Pascal will have support for async on the driver side, so owners don't really have to worry much. For Maxwell I just can't say the same. We also don't know if async will even be widely used in the future.

here's a post about it from Nvidia reddit:
*
In both Maxwell and Pascal, it is the same at the SM level: each SM can work on a different workload. It is not separated at the GPC level. The changes from Maxwell to Pascal are within the SM and also in the Gigathread Engine. I'd guess the SMs now have a buffer to store their "partial" results when context switching between graphics and compute workloads.

In Maxwell, a context switch can only be made at drawcall boundaries. Imagine: first drawcall, 10 SMs dedicated to the graphics load, 6 SMs to the compute load; second drawcall, 14 SMs to graphics, 2 SMs to compute. It will be bad if, in the first drawcall, the compute load takes longer to finish: the 10 graphics SMs will be idling, off to Hawaii for a vacation.

In Pascal, a context switch can be made at the instruction/pixel level. Imagine: it can draw pixels halfway, switch to the compute load, finish it, then resume the drawing.

Regardless of GCN or SM architecture, a context switch is required to change from a graphics to a compute workload. Within an AMD CU (64 cores), it can't do graphics and compute concurrently (e.g. 40 cores on graphics, 24 cores on compute). It can, however, do graphics halfway, pause, switch to the compute load, finish it, then resume the graphics.

In this regard, Pascal and GCN work very similarly. In DX12/Vulkan with "async compute enabled", without a compute-only queue, NVIDIA GPUs will idle more because there are now fences (a fence basically says "hey, don't submit any more work until you have the result from the previous work"); this is bad and reduces the workload fed to the GPU. Hence, performance drops.
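
For the non-programmers, here is roughly what such a fence looks like from the app side in Vulkan (a minimal sketch; `device`, `queue` and `submit` are assumed from context): the CPU blocks until the GPU signals that the earlier work is done, which is exactly the stall described above.

CODE
VkFenceCreateInfo fci{};
fci.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;

VkFence fence;
vkCreateFence(device, &fci, nullptr, &fence);

// Submit work; the fence is signaled when that work completes on the GPU.
vkQueueSubmit(queue, 1, &submit, fence);

// "Don't submit any more work until you have the result": block here.
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &fence);
// Only now can dependent work be submitted; the GPU may have idled meanwhile.

vkDestroyFence(device, fence, nullptr);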

What NVIDIA prefers is to just keep submitting work to the GPU and let their hardware scheduler figure out how to distribute and dispatch the workload (this is also how they keep their GPUs busy in DX11).

TL;DR summary: as long as the code is not written in a way that stalls the SMs, Maxwell should still perform well.

In practice however, the graphics workload and the compute workload don't always end together; there will be some difference in completion time. For example, the graphics workload might take 5 ms to complete while the compute workload takes 8 ms, so some SMs will sit idle for 3 ms. Total job duration: 8 ms.

Pascal is the better solution compared to Maxwell. Say the graphics workload takes 5 ms to complete and the compute workload is not yet done: the idle SMs can be retasked to finish the compute workload faster, so instead of 8 ms it only takes 6 ms. Total job duration: 6 ms.

It is also more resilient to badly written code. On Maxwell, the driver will just crash when it detects that something is not right and is stalling the SMs.
Skylinestar
post Jul 14 2016, 01:48 AM

Mega Duck
********
All Stars
10,479 posts

Joined: Jan 2003
From: Sarawak
QUOTE(Moogle Stiltzkin @ Jul 13 2016, 11:11 PM)
Anyone read this yet?
Is TSSAA any good or rubbish?
*
https://www.reddit.com/r/Doom/comments/4qte...up_reflections/

SMAA:
[screenshot]

TSSAA:
[screenshot]

But the AA quality gets a thumbs up.
svfn
post Jul 14 2016, 03:02 AM

On my way
****
Junior Member
500 posts

Joined: Oct 2015
From: Penang
QUOTE(Demonic Wrath @ Jul 14 2016, 12:39 AM)
In this regard, Pascal and GCN work very similarly. In DX12/Vulkan with "async compute enabled", without a compute-only queue, NVIDIA GPUs will idle more because there are now fences (a fence basically says "hey, don't submit any more work until you have the result from the previous work"); this is bad and reduces the workload fed to the GPU. Hence, performance drops.

What NVIDIA prefers is to just keep submitting work to the GPU and let their hardware scheduler figure out how to distribute and dispatch the workload (this is also how they keep their GPUs busy in DX11).

TL;DR summary: as long as the code is not written in a way that stalls the SMs, Maxwell should still perform well.

In practice however, the graphics workload and the compute workload don't always end together; there will be some difference in completion time. For example, the graphics workload might take 5 ms to complete while the compute workload takes 8 ms, so some SMs will sit idle for 3 ms. Total job duration: 8 ms.

Pascal is the better solution compared to Maxwell. Say the graphics workload takes 5 ms to complete and the compute workload is not yet done: the idle SMs can be retasked to finish the compute workload faster, so instead of 8 ms it only takes 6 ms. Total job duration: 6 ms.

It is also more resilient to badly written code. On Maxwell, the driver will just crash when it detects that something is not right and is stalling the SMs.
*
I don't think Nvidia has had a hardware scheduler since Fermi; it was removed in Kepler, a decision by NV. For Maxwell it will be more stable if you stick to DX11, depending on the title and how good its DX12 implementation is.
adilz
post Jul 14 2016, 03:44 AM

Getting Started
**
Junior Member
267 posts

Joined: Oct 2007
From: Kuala Lumpur, Malaysia


QUOTE(Demonic Wrath @ Jul 14 2016, 12:39 AM)
Regardless of GCN or SM architecture, a context switch is required to change from a graphics to a compute workload. Within an AMD CU (64 cores), it can't do graphics and compute concurrently (e.g. 40 cores on graphics, 24 cores on compute). It can, however, do graphics halfway, pause, switch to the compute load, finish it, then resume the graphics.
*
You got me a bit confused here. When I started reading about async compute, even before I got my ex GTX 970s earlier this year, all the articles and forums I read said otherwise. I can't list or recall all of them, but here's one example: GTX 1080 Async Compute - eTeknix. My understanding: SM/CU = Streaming Multiprocessor / Compute Unit.

Anyway, in the broader context of async compute implementation, yes, I agree that Nvidia's Maxwell and Pascal have that too, albeit via a very different path from AMD. It's just that in the specific context of the DX12/Vulkan async compute feature we've seen so far, Nvidia GPUs do not take advantage of it like AMD GCN cards do.

Because Nvidia has its own way of doing async compute (the eTeknix article has details) that is not tied to the APIs (DX11, DX12, Vulkan or OpenGL), at least to me it explains why there's not much performance difference running DX11 vs DX12, or Vulkan vs OpenGL. Pascal saw some improvement in DX12, but I attribute that to other DX12 or Vulkan API feature optimizations, nothing to do with those new APIs' async compute implementation. The two recent events with the RotTR and DOOM patch updates are, to me, proof of that.

I think Nvidia is already coming to the end of squeezing whatever performance they can out of Maxwell, and their focus now is more on Pascal. News like the official word from Bethesda that they are still trying to get async compute to work with Nvidia at least offers a glimmer of hope, though I suspect that if there's anything to be gained, it will be just for Pascal. That's the reason why I sold my GTX 970s after just six months and went for the GTX 1080. I must admit I had some regrets buying the GTX 970, but it's my fault: seeing "DirectX 12" on the GTX 970, I thought I would get to enjoy the benefits the new API can bring. Alas, it means the card will run DX12 games, just don't expect it to take advantage of them.

I also have to agree with some articles that Nvidia's response to the recent events with the DX12/Vulkan patches has been pretty anemic, which, to me as a user of an Nvidia card, is kinda disappointing. A comment or response saying they are working on it would at least show that it matters to them. Otherwise it's like Nvidia telling me, "Look, I already gave you the best GPU money can buy, so stop whining and go play your games." I can only hope Nvidia proves me wrong.

That YouTube video I mentioned in an earlier post said something about Vulkan and Nvidia. A quick check and it seems Nvidia is heading up and pushing the Vulkan API initiative; see this article: Nvidia Release Vulkan Driver. The irony is that a developer made full use of what the Vulkan API can offer, but AMD's the one getting the benefit. Again, the b****y async compute thingy.

This post has been edited by adilz: Jul 14 2016, 03:53 AM
Demonic Wrath
post Jul 14 2016, 07:47 AM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

QUOTE(svfn @ Jul 14 2016, 03:02 AM)
I don't think Nvidia has had a hardware scheduler since Fermi; it was removed in Kepler, a decision by NV. For Maxwell it will be more stable if you stick to DX11, depending on the title and how good its DX12 implementation is.
*
On NVIDIA GPUs, there is a hardware scheduler: the Gigathread Engine. This Gigathread Engine block is equivalent to the (graphics command processor + ACEs) in AMD GPUs. NVIDIA doesn't show the internals of their Gigathread Engine. In Vulkan, they have exposed 16 graphics queues and 2 copy engines in the latest driver.

The driver only compiles the task lists and sends them to the GPU (host to device). How the tasks are distributed is managed by the hardware scheduler (the Work Distributor) on the GPU.

This might be clearer with all the diagrams: https://developer.nvidia.com/content/life-t...ogical-pipeline
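
If you want to verify what your own driver exposes (that "16 graphics queues and 2 copy engines" claim, for instance), a small Vulkan sketch like this prints each queue family's capabilities and queue count (illustrative; `gpu` is a VkPhysicalDevice):

CODE
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

void dumpQueueFamilies(VkPhysicalDevice gpu) {
    uint32_t n = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &n, nullptr);
    std::vector<VkQueueFamilyProperties> fams(n);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &n, fams.data());

    for (uint32_t i = 0; i < n; ++i) {
        std::printf("family %u: %u queue(s)%s%s%s\n", i, fams[i].queueCount,
            (fams[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
            (fams[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
            (fams[i].queueFlags & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
    }
}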

QUOTE(adilz @ Jul 14 2016, 03:44 AM)
You got me a bit confused here. When I started reading about async compute, even before I got my ex GTX 970s earlier this year, all the articles and forums I read said otherwise. I can't list or recall all of them, but here's one example: GTX 1080 Async Compute - eTeknix. My understanding: SM/CU = Streaming Multiprocessor / Compute Unit.

*
Yes. An AMD CU is roughly equivalent to an NVIDIA SMM (the internals are different). For example, the R9 390X has 44 CUs.

I mean that within a single AMD Compute Unit (CU), the workload is either graphics or compute too: the 64 cores in a single CU will all be doing the same type of workload, and it needs a context switch when changing between graphics and compute. But different CUs can work on graphics and compute in parallel.
Someonesim
post Jul 14 2016, 07:49 AM

In my way
*******
Senior Member
9,132 posts

Joined: Aug 2005



QUOTE(Topet @ Jul 13 2016, 10:36 PM)
1.4k for the RX 480 and 1.8k for the GTX 1070. I don't think it's really long, just thicker with the 2.5 slots it uses.
*
2.5 slots I'm OK with, as my setup has 3 slots of space. Length should be OK, I think; it's the height I'm scared of. The PCB and cooler are really tall, and I'm quite sure it won't fit my puny case, as the PCI-E power connector faces the top.
Is there an L-shaped PCI-E power connector?


QUOTE(kevyeoh @ Jul 13 2016, 11:21 PM)
Hi, may I know where to get the 1070 @ RM1.8k? Is it via GemFive? But that one is limited and already sold out. Wondering if we can get it from a normal shop at such a price.

Thanks!
*
Just try to monitor it on a daily basis. Last time, when I found out about this, it was sold out; then I checked back one or two days later and it had restocked.

QUOTE(x5_416 @ Jul 13 2016, 11:55 PM)
The promotion is until 18th July. I also quickly ordered one as soon as I saw them restock.
*
Oh yeah, after 18 July there'll be a newer coupon, either better or worse, or it could be the same. GemFive IINM always has coupon code discounts, to fight Lazada!
