OpenCL vs. CUDA Performance in Mid-Range Graphics Cards

majestictorment

Distinguished
Jul 24, 2010
Hello Community,

I am looking to replace my FirePro V4800 with a better mid-range "gaming" card for content creation.

I want to move away from the professional cards and back into the gaming segment for more VRAM (at least 3GB) for larger modelling assemblies in Rhino, 3ds Max and Revit (just now learning it). I also use AutoCAD and Photoshop. I know the Pro cards' drivers are supposedly more streamlined for what I am doing, but my ATI 4850 does just as good a job in these programs, if not better.

I'm looking to upgrade now because I'm interested in OpenCL support for rendering (I'm using V-Ray and looking to move to RT [always open to different rendering engine suggestions]). It's amazing that $300 graphics cards render 20 times faster than $300 CPUs.

Now that you know where I'm at, I have been looking at the 660, 660 Ti, 570 (where they're still available), 7870, and the 7950.

All of my Google research sends me to articles and forums from 2010 and 2011. None have proved very helpful -- mostly clowns talking nonsense.

Benchmarks in OpenCL are proving inconclusive at best with ATI trashing Nvidia and Nvidia trashing ATI at specific tasks.

What should I be looking for in OpenCL benchmarks at this point for the software listed above?

It seems there are two camps -- the CUDA camp that says "stay away from Kepler and go Fermi cause Kepler's CUDA cores are weak." However, the OpenGL camp says, "damn dude, just look at the 7800 series benchmarks trashing Nvidia back to the stone age."

I'm torn. CUDA has been around longer and is generally accepted, and is therefore the lower risk. OpenGL seems more like the future, even though Nvidia seems to be doing everything to fight it.

My system is:
AMD FX 8350
24 GB DDR
Samsung 840 250GB SSD
ATI FirePro V4800 (replacement budget not to exceed $300)

FYI, I don't do much gaming. My idea of gaming is Infinitely Modded SimCity 4 and some Dawn of Discovery. My board supports Crossfire but not SLI if that helps...

Thanks in advance!
 

chuyayala

Honorable
Oct 29, 2012
OpenCL has been around for a while, and it looks like the industry is moving towards it since it's an open standard, unlike CUDA. There is also some speculation that many games may run more efficiently on AMD architecture, since the PS4 (and presumably the next-generation Xbox) will be running on AMD hardware. I highly doubt there will be a difference though; it all depends on the software you currently use and what features it takes advantage of. Adobe recently started implementing OpenCL, but I'm unsure about the rest.

If I had to suggest an AMD card, I would suggest the 7870 XT (Tahiti LE), since it has a SUBSTANTIALLY higher double-precision floating-point potential than the 7870 GHz Edition without that much of a price jump. I'm not sure how accurate these numbers are, but here they are: http://en.wikipedia.org/wiki/Radeon_HD_7000_Series
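If you want to sanity-check the double-precision claims on whatever card you actually end up with, the OpenCL API can report it directly. Here's a rough sketch of a device query (my own illustrative code, untested on these particular cards; build against the vendor's OpenCL SDK and link -lOpenCL):

```
// Lists every GPU the OpenCL runtime can see and whether it advertises
// double precision via the cl_khr_fp64 extension. Illustrative sketch only.
#include <CL/cl.h>
#include <cstdio>
#include <cstring>

int main() {
    cl_uint np = 0;
    clGetPlatformIDs(0, nullptr, &np);   // count available platforms
    if (np > 8) np = 8;                  // keep the sketch simple
    cl_platform_id plats[8];
    clGetPlatformIDs(np, plats, nullptr);

    for (cl_uint p = 0; p < np; ++p) {
        cl_uint nd = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &nd)
                != CL_SUCCESS || nd == 0)
            continue;                    // no GPUs on this platform
        if (nd > 8) nd = 8;
        cl_device_id devs[8];
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, nd, devs, nullptr);

        for (cl_uint d = 0; d < nd; ++d) {
            char name[256] = {0}, ext[8192] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devs[d], CL_DEVICE_EXTENSIONS, sizeof(ext), ext, nullptr);
            // cl_khr_fp64 in the extension string means real DP support
            printf("%s : FP64 %s\n", name,
                   strstr(ext, "cl_khr_fp64") ? "yes" : "no");
        }
    }
    return 0;
}
```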

Edit: I realized you can't exceed $300. In that case I recommend the 7790 at the low end, since it has similar compute capability to the 7850 (according to recent Tom's Hardware benchmarks).

Edit 2: Definitely a good deal: http://www.tomshardware.com/news/powercolor-deal-coupon-code-newegg,21692.html
 

majestictorment

Distinguished
Jul 24, 2010


Thank you. I was just looking at the 7870 Tahiti LEs. I wonder about driver support for them... I'm not sure how AMD does their drivers, but these cards seem off the grid (which I'm totally cool with).

Strongly considering the Tahiti LE since that is a really great deal (thanks for the code!).
 

majestictorment

Distinguished
Jul 24, 2010


OpenCL

http://en.wikipedia.org/wiki/OpenCL
http://www.tomshardware.com/reviews/geforce-gtx-650-ti-benchmark-gk106,3318-14.html

I'm trying to take advantage of the OpenCL platform for rendering instead of my CPU (basically using the CUDA cores or Stream Processors); however, not all 3D programs have adopted OpenCL. For example, Chaos Group, which makes V-Ray RT, even states on their site to go with Nvidia, even though they somewhat support OpenCL.

http://chaosgroup.com/en/2/vray_features.html (scroll to the bottom)
http://www.youtube.com/watch?v=6ZmuI2xQp2M (all the bells and whistles are in the CUDA mode, not exactly streamlined for Radeon [or so it seems?]).
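For anyone wondering what I mean by "using the CUDA cores or Stream Processors": GPU renderers give every pixel or sample its own hardware thread. Here's a toy sketch of the idea (my own illustration, nothing like V-Ray's actual code):

```
// NOT a real renderer's code -- just an illustrative toy. This is the kernel
// source a host app would hand to clCreateProgramWithSource(); each work-item
// shades one pixel, so thousands of stream processors run it side by side.
const char* kToyKernelSrc = R"CLC(
__kernel void tonemap(__global const float4* in,   // linear HDR pixels
                      __global uchar4* out,        // 8-bit output image
                      const float exposure)
{
    size_t i = get_global_id(0);                   // this work-item's pixel
    float4 c = clamp(in[i] * exposure, 0.0f, 1.0f);
    out[i] = convert_uchar4(c * 255.0f);           // pack to RGBA8
}
)CLC";
```

A CPU works through those pixels a few at a time; a 7950 has 1792 stream processors chewing on them simultaneously, which is where the huge rendering speedups come from.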
 

majestictorment

Distinguished
Jul 24, 2010


Basically. Although it doesn't make sense, because the AMD hardware does so much better in theoretical benchmarking than their Nvidia counterparts. But real-world performance is always different, isn't it? It's as if these software developers are all frozen in time, or more likely OpenCL just isn't as stable or developed as CUDA.

If I did go CUDA, what would you all recommend? I've seen benchmarks where a 570 is beating the 660 Ti in CUDA performance despite the 660 Ti having three times more CUDA cores.
 

lloydloo

Honorable
Jan 7, 2013
Well, for OpenCL I'd recommend an AMD card, because they are known to have far better compute performance than the GTX 600 series. If you go CUDA, I'd recommend the GTX 570, because the GTX 600 series has SEVERELY crippled compute performance, which makes them suck at both OpenCL and CUDA. For AMD, get the HD 7950, and for Nvidia get a 570.
 
Basically. Although it doesn't make sense, because the AMD hardware does so much better in theoretical benchmarking than their Nvidia counterparts.

I think maybe you're a bit confused about this "open" stuff. AMD hardware being much better in OpenCL than Nvidia hardware has nothing to do with Nvidia being better at OpenGL.

It's as if these software developers are all frozen in time, or more likely OpenCL just isn't as stable or developed as CUDA.

The way I understand it, development on OpenCL might be slower than CUDA due to its nature as an open platform. The adoption rate also depends on how hard the platform is being pushed by the hardware manufacturers.

I've seen benchmarks where a 570 is beating the 660 Ti in CUDA performance despite the 660 Ti having three times more CUDA cores.

Nvidia made big changes to the CUDA cores when moving from Fermi to Kepler. Kepler might have more cores, but they are generally weaker than Fermi's cores; that's why the GTX 680 is not three times faster than the GTX 580 despite having three times more CUDA cores. As for double-precision (DP) compute performance, it is to be expected that the GeForce 600 series is weaker than the GTX 580/570 (GF110). The GK104 (and below) used in the 600 series is much more like the GF104/GF114 from the Fermi family, which was geared towards gaming performance. The true successor to GF110 is GK110, used in the GTX Titan and the Tesla K20 series. So if you're going CUDA and really need DP, then GF110-based cards might make much more sense for you.
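You can actually see this from a device query. A rough sketch using the standard CUDA runtime API (the cores-per-SM mapping is from Nvidia's published Fermi/Kepler specs; anything newer just prints 0 as unknown):

```
// Build with nvcc (or a C++ compiler linked against cudart). Shows why raw
// core counts mislead across generations: Fermi packs 32 (CC 2.0) or 48
// (CC 2.1) cores per SM, Kepler packs 192, but each Kepler core is weaker.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        int coresPerSM = (p.major == 2) ? (p.minor == 0 ? 32 : 48)
                       : (p.major == 3) ? 192 : 0;  // 0 = unknown generation
        printf("%s: CC %d.%d, %d SMs x %d cores @ %d MHz\n",
               p.name, p.major, p.minor,
               p.multiProcessorCount, coresPerSM, p.clockRate / 1000);
    }
    return 0;
}
```

A GTX 580 reports 16 SMs x 32 cores = 512, a GTX 680 reports 8 SMs x 192 = 1536; three times the cores, but nowhere near three times the throughput per clock.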
 

somebodyspecial

Honorable
Sep 20, 2012


All of the apps you mention use CUDA. OpenCL will be behind this tech for ages, as there is NO money behind it. Basically, if your app uses CUDA, nothing else will beat it. This is the product of 7 years of financing by NV and 500 universities teaching it in 26 countries. All major rendering apps (3D, video, etc.) use CUDA. OpenCL has nowhere near the performance of CUDA, nor is it used in as many apps. That's what you get when something is free. CUDA can get you far better results than OpenCL purely because there is a lot of money and development behind it. OpenCL is to CUDA for professional apps as Linux is to DirectX for gaming. They are currently not even in the same league.

Where CUDA can be used vs. OpenCL, I challenge anyone to find CUDA getting beat by OpenCL. You should find OpenGL doing the same. It's just better supported. If you stray outside the main pro apps your mileage may vary (and it's more up to whoever coded the app), but if you are using a professional rendering, photo or video editing app, you'll find almost all support CUDA. NV will keep putting money behind CUDA forever. They have no interest in speeding up OpenCL (unless forced), and AMD has no money to spend on making OpenCL beat CUDA. Top that off with the fact that NV owns 65% of the discrete market and you have a no-brainer in any big app optimizing for NV hardware (like AMD or NV, it doesn't matter; app optimization goes where the hardware customers are). NV should always be tested on CUDA where available, but Tom's seems to ignore it, choosing apps 95% of their audience couldn't care less about. They should be testing Blender, 3ds Max, Adobe apps, etc., which all use CUDA and either OpenGL/OpenCL to support AMD. That is the REAL picture.

You would never use OpenCL in the Adobe CS suite for NV. You would perhaps for AMD, but you'd quickly turn on CUDA for NV. I guess they just don't want to show AMD getting their a$$ handed to them. Vegas Pro 12 uses CUDA too, but I don't think Tom's turned it on in their article this last week. Odd, correct? They continue to run crap like Folding@home etc. that nobody makes money from. Why not just run the MONEY-making apps? Sure, you could say the contest wouldn't be fair, but that's just how it is in reality. EVERYTHING you make money on runs CUDA after 7 years of NV throwing money at the devs and schools. So it is an unfair fight by far, but it's REALITY for users and what you'd buy if doing pro work. You're crazy to bring home an AMD card for money-making. Years ago it was a fairly even fight depending on the app, but not any more. It's CUDA everywhere, while AMD tries to be relevant in anything but Folding@home and Bitcoin mining. Neither of which gets you a job, and Bitcoin mining is only successful on botnets now :) When you see OpenCL being taught in 500 schools across the world, maybe they're in the race again ;) That won't happen until AMD is bought by someone with billions behind them (IBM/Samsung/Apple/Qcom - hope for one of these to buy them!).
http://www.blenderguru.com/4-easy-ways-to-speed-up-cycles/
12x faster in Blender than CPU
http://www.tomshardware.com/reviews/radeon-hd-7790-bonaire-performance,3462-8.html
OpenGL in APPS: look at AutoCAD. Pro people optimize for NV. Only when Tom's goes to dumb synthetics that mean nothing does AMD compete. Blown away in the REAL APP AutoCAD, though. But even in synthetics in OpenGL he has to say: "It looks like OpenGL performance is a sore spot for AMD's current Radeon cards, at least in Windows."...
Yes, it always has been NV's strong suit, because before CUDA everyone wrote for OpenGL in pro apps. But now you get 12x etc. from CUDA, so OpenGL vs. CUDA is as dumb a comparison as OpenCL vs. CUDA when you have a choice.
http://www.tomshardware.com/reviews/adobe-cs5-cuda-64-bit,2770-8.html
CUDA in PS CS5 2.5 years ago; only better now in CS6. Note the CPU utilization tanks too, so you can use the machine for other stuff while rendering, etc.

In the 7790 article this week they used RatGPU (an OpenCL-only raytrace crap app), Bitcoin mining (no real work there either), and LuxMark... LOL. Why not Adobe CS6, Blender, 3ds Max, AutoCAD, V-Ray, etc.? They all support CUDA, that's why (and some form for AMD to use also; OpenGL or OpenCL can be used in all of them).
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-16.html
That about sums up NVENC vs. whatever AMD has, also. All the big apps again support CUDA or at least OpenGL: Sorenson Squeeze, Sony, Premiere, CyberLink, etc. for video.
http://developer.download.nvidia.com/GTC/PDF/GTC2012/PresentationPDF/S0603-Monday-GPU-Ray-Tracing.pdf
Check page 8 for all the ray tracing apps. Note there are only TWO for OpenCL and a dozen for CUDA, etc... Why show OpenCL here then? See the point? Reality is a LOT different than Tom's shows. LuxMark is the WORST you can choose from the list, as it's OpenCL-only. No money behind it, so crap support for devices. Fire up iray or V-Ray in 3ds Max etc. and the story goes to 4-12x faster for NV :) I wish they'd quit highlighting open-source crapware here at Tom's, as if anyone buying expensive hardware wouldn't buy the REAL APPS to work with them.

 

majestictorment

Distinguished
Jul 24, 2010


^ This ^ is one of the greatest hardware rants I have ever read. If any of you read it, don't dismiss it as fanboy nonsense, because somebodyspecial has an excellent point. I believe I have wasted a good 30 hours of work this week and another 20 at home researching this, and I have come to an important conclusion.

I have to change my software.

I am emphatically convinced now that OpenCL is the way of the future. With AMD, Apple, Microsoft and now Autodesk pushing it, it's inevitable. OpenCL performance, when engineered correctly, dominates CUDA. Software developers are quickly realizing this. AMD is ready and Nvidia is not (as they are still pushing CUDA).

http://semiaccurate.com/2012/06/11/amd-and-autodesk-speed-up-maya-with-opencl/#.UVT-uBw3trk

Look at this site dedicated to OpenCL benchmarking,
http://clbenchmark.com/compare.jsp?config_0=14092108&config_1=12081435

What I know for sure now is: if you are serious about using CUDA and don't want to spend more than $400, get a 580 or 590. Otherwise you're in Titan land. If you are like me and want to stay below $300, you are constrained to the 560 Ti or 570. That's fine if you are using small models, but if you are working on larger and larger assemblies you need 2 or 3GB of VRAM. Now we're back to PRO-card-style memory restrictions.

http://www.legitreviews.com/article/2070/5/
As you see, the PRO card dominates in viewport stuff but gets trashed in OpenCL rendering. While it's important to be able to build the model you're about to render, you make your money by being able to render the damn thing!

I almost bought the 7870 LE Myst, but stalled and it sold out. I almost bought the 660ti 3gb, but changed my mind and almost bought a 7950. I stalled again and almost bought a 580 off eBay. Almost blew my budget, power supply and electric bill on a 590...

I'm just not happy. This reminds me of buying a car: just nothing satisfying me at my price point. I suppose that was designed by Nvidia to make me spend $800-2,000 on a video card.

I'm in a holding pattern for now. I have found that Revit actually works better on AMD, so that's a plus. Rhino and SketchUp have been running just fine on my V4800 and my 4850. If GPU rendering is what I am after, then I have no choice left but to get a 7950 with new software, or run three 580s to match the 7950's performance.

Obvious, right? Or wait for Kepler to "increase compute performance", whenever that will be...

http://en.wikipedia.org/wiki/GeForce_700_Series

I will keep you all updated, but it's looking like the 7950. Waiting to see if some good Easter deals pop up and then it's on to downloading a bunch of trial software!

Thanks to everyone who contributed. If you want to still chime in feel free!
 

Idonno

Distinguished
Jan 3, 2011

Although Adobe still doesn't support AMD compute (OpenCL) on PCs, the newer Nvidia cards are almost as useless with Adobe's current support. Adobe and AMD have teamed up to bring support for the open standard to Windows with Premiere Pro's next version.
Articles: http://www.engadget.com/2013/04/06/adobe-premiere-pro-windows-opencl-support/
http://www.anandtech.com/show/6881/opencl-support-coming-to-adobe-premiere-pro-for-windows
It's true that the money and power of important hardware manufacturers has an extreme amount of influence, but that only goes so far. The software manufacturers aren't going to keep backing only CUDA at its present performance levels, because they need to stay relevant with their consumers more than any other consideration. Vegas Pro 12 supports OpenCL, Adobe is on the way, and the rest will soon fall into place like dominoes. The last thing these software manufacturers want is to become irrelevant.
So IMO AMD is the only reasonable choice for your new card.
 
Why is it that Nvidia's future cards will be useless? Adobe working to integrate OpenCL into their software doesn't mean they will drop CUDA altogether. Also, when they integrated CUDA into their software they did not do it alone; they had Nvidia backing them up, just like AMD is backing them up now to add proper OpenCL support. Also, from the link you posted above:

Performance aside, it’s interesting to note that it looks like Adobe will be keeping their CUDA code path, as AMD’s test configurations indicate that the NVIDIA cards are using the CUDA code path even on Premiere Pro Next. Having separate code paths is not all that unusual in the professional world, as in cases like these it means each GPU family gets an optimized code path for maximum performance, but it does mean Adobe is putting in extra work to make it happen.
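That "separate code paths" design is simpler than it sounds. A purely illustrative sketch (obviously not Adobe's actual code) of what backend selection looks like:

```
#include <cstdio>

enum class Backend { Cuda, OpenCL, Cpu };

// Pick the vendor-optimized path at startup, fall back gracefully.
Backend pickBackend(bool hasCudaGpu, bool hasOpenClGpu) {
    if (hasCudaGpu)   return Backend::Cuda;    // NVIDIA: mature CUDA path
    if (hasOpenClGpu) return Backend::OpenCL;  // AMD (and others): OpenCL path
    return Backend::Cpu;                       // software fallback
}

int main() {
    Backend b = pickBackend(/*hasCudaGpu=*/false, /*hasOpenClGpu=*/true);
    printf("Using backend %d\n", static_cast<int>(b));  // 1 = OpenCL
    return 0;
}
```

Each branch then dispatches to kernels tuned for that vendor's hardware, which is the "extra work" the quote refers to.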
 

Idonno

Distinguished
Jan 3, 2011
First off, I said:
The software manufacturers aren't going to keep backing only CUDA at its present performance levels
The key word there was ONLY. I never said anyone was going to drop CUDA altogether.
However I did say:
Although Adobe still doesn't support AMD compute (OpenCL) on PCs, the newer Nvidia cards are almost as useless with Adobe's current support
Well, I'll give you this: "almost as useless" was probably too strong. But when older GTX 580s with 512 CUDA cores are blowing away the new GTX 680s with 1536 CUDA cores in Adobe Premiere Pro, you would have to be blind to think that there isn't a serious issue with using Nvidia's new desktop cards for this kind of work.

I also think that this issue is mostly of Nvidia's own making, to try to entice its customers into emptying their wallets on a workstation card.

Like I said on another thread:
Nvidia was the undisputed king of GPU compute, and the 500 series cards still do well for video editing.

Unfortunately Nvidia's 600 series cards are the exact opposite now, while AMD has really improved its GPU compute in the last generation of cards.

Nvidia makes a really nice card, but IMO they have purposely hamstrung CUDA for anything other than gaming so as to sell more Quadros.

It's a shame; Nvidia's new cards could be quite capable if they weren't hamstrung on purpose. This is evidenced by the hacks that some are using to get around Nvidia's self-imposed limitations. Unfortunately these hacks are proving to be of limited effectiveness and are prone to problems.

I do hope that enough people decide not to empty their entire wallet on an overpriced Quadro and instead buy a much better performing AMD card (for editing with a desktop card).
Then maybe Nvidia will lose enough business to see the error of their ways and start giving their customers what they want again.

I'm not just some AMD fanboy; I really like Nvidia, and had they been a little better with three monitors when I built this PC, I would have bought a GTX 580 for the reasons stated above. Now that I'm thinking of upgrading and Nvidia has improved its multi-monitor support, this happens.

I would love to see Nvidia regain the direction they were heading with their 500 series cards, but blindly supporting them won't help that happen.


 

somebodyspecial

Honorable
Sep 20, 2012


NV isn't fighting OpenGL, and AMD doesn't trash them in OpenGL. I think you meant OpenCL, which nobody really uses (yet? doubtful it will ever supplant 7 years of CUDA and being taught in 20+ countries; good luck AMD... LOL). Adobe is just getting on the wagon, but CUDA is still there; it remains to be seen who is faster. If Tom's doesn't benchmark CUDA vs. OpenCL in Adobe when it hits, I may just quit reading this site like I did back in the SYSmark days. OpenGL has worked fine on both for years.

The moral of the story is: if your software supports CUDA, go NV. PERIOD. Folding@home, Bitcoin mining and LuxMark OpenCL benchmarks mean nothing. CUDA is used by pro apps and makes you money. These three don't, and are just benchmarks of what may happen one day (LOL) if AMD ever makes enough to fund the software industry making changes to already-great CUDA apps for only ~28% of the GPU market. They will aim for the 65% NV market and accept the check from them instead. AMD has no check to write so far except for Adobe (wisely, they chose to do their R&D with Adobe; it is a highly used app and creates a large portion of internet content). Most pro apps already support CUDA or OpenGL. Why waste money optimizing for OpenCL to make AMD users happy? They are a smaller market, can usually run the same thing in OpenGL already, or you just should have bought NV (I'm guessing this is how most optimization talks go around a programming table when trying to justify the costs of helping AMD out).

In Tom's article here you can see in the OpenGL benchmarks that performance depends on what you're doing. You can see LightWave doing particularly well in the very specific benchmark they chose. You also see in Maya that the GTX 680 and Titan are limited by something, as the Titan outclasses a GTX 680 by FAR on paper. I'm thinking they should have run something else that would separate the cards. A Titan should never score a tie with a 680; it has far more CUDA cores, 6GB of RAM, etc. This should be no contest, or something is holding them back. Look at how Maya flips back and forth in OpenGL also. It really depends on not only what software you use but, even further, how you use it. AMD/NV blow each other away depending on what is being done in the same app... LOL. In the end I go CUDA though, because I can't wait for AMD to get more than Adobe etc.

Elsewhere you can find a 10-20x performance improvement when using CUDA, even over Intel 6-8 core chips, so again: if your app supports CUDA, go NV, period. If OpenCL gains traction, expect a driver to improve all cards from NV shortly after it does :) They are only holding it back to keep CUDA on top, IMHO. What do you expect? I don't see Microsoft pushing OpenGL for gaming, which would kill DirectX, either... LOL. No surprise they will never optimize for an app that already has CUDA until forced. CUDA already works great, and they paid to allow me to say that :) It runs in 220 of the most popular apps and scientific/engineering software out there. AMD has Adobe and a few dumb benchmarks you don't make money on so far; not much more, unfortunately. Note they chose not to show image/video processing here on Adobe (Rightware instead... LOL). They should have run OpenGL on AMD vs. CUDA on NV in Adobe for both of these tests... Rightware shows you nothing. Can I make money on Rightware? No, because it's just a test. Adobe, however, works already and IS optimized heavily, and everyone uses it. But it would have shown NV dominating with CUDA currently (well, duh; this is why AMD is helping Adobe get OpenCL out there for Adobe compute). If you don't plan to get the next rev of Adobe anytime soon, go NV, as CUDA already works in it today.

Just like waiting for AMD to fix Enduro (see notebookcheck.com and the 7970M rehash article), and waiting a year for them to get decent Never Settle drivers out the door (see the hardocp.com articles on drivers in the last month or two covering their crappy situation for a YEAR), you will wait for AMD to pony up cash for each and every app as NV has done over the last 7 years. My Radeon 5850 will be my last AMD card for a while, until I see the driver situation fixed (it should happen since the Xbox/PS4 are done now, but I'll wait for them to PROVE it). Currently NV will be my next purchase, if only for the drivers and CUDA. I have complete faith they'll put out an OpenCL driver worth mentioning if AMD gets more apps optimized for it while at the same time getting devs to DROP CUDA. If they can't get that to happen (as Adobe still has CUDA in the next version coming), I'll still go NV for CUDA.

OpenCL isn't being taught at universities everywhere like CUDA is (because nobody funds it!). Open source is only good with money behind it (like Linux: many big companies behind it). Without the money, it's a "maybe it will be great one day" affair, and benchmarks like Rightware/Folding@home/Bitcoin mining, which aren't REAL PRODUCTS, merely show you the "DREAM" that can happen if it ever gets funding for some REAL apps and games to be optimized. Will they ONE day take over the world with OpenCL? Doubtful, but none of us know that. CUDA is here in everything NOW, including Sony Vegas Pro 12, which Idonno mentions. :) Also, it's not useless in Adobe; CUDA has worked fine for years (since CS4).
http://www.sonycreativesoftware.com/vegaspro/gpuacceleration
Not sure where he gets his info, other than just not liking NV, I guess? I highly doubt his link is saying 4.5x over NV; rather, 4.5x faster than AMD in OpenGL alone. I'm guessing a tie vs. NV+CUDA in Adobe, but we'll see. Note that post doesn't mention NV, and even if it's great it won't be 4.5x CUDA... LOL. I'll have to see that for myself, not from a PR.

Personally, I say wait until the next revs at Xmas/Q1. You'll make a much wiser choice then, and hopefully at 20nm :) I'm waiting myself, if I can hold off that long. At the very least wait for the minor refresh coming shortly (next month or two?). All the revs in between die shrinks don't do much; we get the great gains at the die shrink (and Xmas/Q1 is a die shrink AFAIK).
 

somebodyspecial

Honorable
Sep 20, 2012


Links to GTX 580 vs. GTX 680 showing the 580 trouncing it in Adobe running CUDA, please? Premiere, Photoshop, or AE; I don't care which, I just want to see this in a review with benchmarks.
BTW: Vegas uses CUDA already; you get it with the v270/285+ driver series and an applicable GPU.
http://www.sonycreativesoftware.com/vegaspro/gpuacceleration
Too bad they didn't use newer cards in their tests :(
 

Idonno

Distinguished
Jan 3, 2011
Well, I certainly don't agree with everything you said, except that you're right, I did mean OpenCL (OpenCL is what I had been posting about in my previous posts and what I meant to write in the last). But I think you're missing the big picture, which is that Nvidia has moved away from GPU computing, and I for one think that sucks.

The funny (actually sad) thing here is all the Nvidia fans on this site (and others) absolutely refuse to acknowledge that fact, even though Nvidia itself stated that intention before they even released their latest cards. This kind of reminds me of the sheeple that follow a particular computer company.

If you like a company you shouldn't be complacent when they move away from areas that you feel are important or, worse yet, deny it's even happening. If no one speaks out and lets them know of their disappointment, they will have little incentive to change. :pfff:

You're a big boy, you can look for yourself, but like the Vegas link you posted, they're strangely absent. Might it be that they don't want to offend a particular company that's dumping large amounts of money into CUDA? It's really a no-brainer if you were to give it a second or two's thought. What you will find, though, is that on the Adobe forums people that bought 680s are complaining about performance and being told to buy a Quadro or downgrade to a 580 if they want to improve performance.
BTW: I'm well aware of that. Please show me where I even so much as insinuated that it didn't.
Yea, I agree. I wonder why they didn't? Oh, that's right. See above, unless you have a more logical reason? :pt1cable:

Well, the first place I got this info was from Nvidia themselves, when after the 500 series cards they said they would be focusing less on GPU computing in the new GTX 600 series.

As far as "just not liking NV I guess?", nothing could be further from the truth. But in all honesty I liked the direction they were headed with their GTX 500 series cards a lot better, and it is my hope that enough people won't just be blind fanboys and will raise enough of a fuss that Nvidia regains their earlier direction of producing a very strong overall card in the next series, and not just a great gamer card. No thanks to the fanboys; just to the realistic people that realize Nvidia's desktop cards have a much greater potential. :kaola:

On the other hand, I'm thrilled to death that AMD has stepped up and improved GPU computing to the extent that they have. It's this kind of competitive attitude that keeps technology progressing at a fast pace and gives consumers the best it has to offer.

I'm not anyone's fanboy and I have no favorites other than the best card for me at the moment. Unlike fanboys, I just want to see everyone put their best foot forward and let the best card win! :D