1080p and 1366x768?

Last response: in Laptops & Notebooks
July 30, 2012 5:28:54 PM

Hi everyone,

I feel like this question is based on at least some solid reasoning, but on the other hand I can't help feeling I'll get laughed off the internet for asking it.

Would a laptop with 1366x768 resolution be able to play 1080p videos? If so, would they be true 1080p?

Thanks in advance, lol


July 30, 2012 5:36:59 PM

Yes, most if not all new laptops can decode video well beyond 1080p, and no, the playback wouldn't be true 1080p but 768p, i.e. scaled down.
July 30, 2012 5:45:03 PM

1366x768 and 1080p (1920x1080) have the same 16:9 aspect ratio, so 1080p video will fit the laptop screen without letterboxing.
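As a quick sanity check (my own arithmetic, not from the thread), the two ratios can be compared directly. Worth noting: 1366x768 is only approximately 16:9, since an exactly 16:9 frame that is 768 pixels tall would be 1365.33 pixels wide.

```python
# Compare the two aspect ratios mentioned above.
print(1920 / 1080)  # exactly 16:9 -> 1.7777...
print(1366 / 768)   # very nearly 16:9 -> 1.7786...
```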
July 30, 2012 5:45:07 PM

It's the player's job to scale the video down to fit on the screen. For example, when you grab the corner of a Media Player window and shrink it into the corner of the screen, Windows Media Player just scales that video down to 480x270 or whatever the size of your window is.

In short the video will play and look great but it will only play at the max resolution of your monitor.
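The fit-to-screen scaling described above can be sketched as a small letterbox calculation (a minimal illustration of the idea, assuming the player preserves aspect ratio; the function name is mine, not any real player's API):

```python
def fit_to_screen(video_w, video_h, screen_w, screen_h):
    """Scale a video to fit the screen while preserving its aspect ratio."""
    scale = min(screen_w / video_w, screen_h / video_h)
    return round(video_w * scale), round(video_h * scale)

# 1080p source on a 1366x768 panel: since 1366x768 is a hair wider than
# 16:9, an exact-ratio fit comes out one pixel narrower than the panel.
print(fit_to_screen(1920, 1080, 1366, 768))  # (1365, 768)

# The 480x270 window example from the post above (exactly 16:9):
print(fit_to_screen(1920, 1080, 480, 270))   # (480, 270)
```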
July 30, 2012 6:09:00 PM

Which laptop are you looking at, and how much are you paying for it?

The objective answer to your question is yes, 1920x1080 media will scale down to 1366x768, and no, it will not be true "1080p".

But if it is reasonable to do so (and if this laptop is 15.6"), then you should make a point to avoid 1366x768 resolution in a 15.6" display. 15.6" 1366x768 displays make things onscreen large, and tend to have very poor image quality due to low contrast. This, however, is not the case for 15.6" 1920x1080 displays.
July 30, 2012 6:09:23 PM

I'm pretty sure that you at least get the full quality audio that comes with higher quality videos even if the video gets down-scaled.
July 30, 2012 7:46:51 PM

edit1754 said:
Which laptop are you looking at, and how much are you paying for it?

The objective answer to your question is yes, 1920x1080 media will scale down to 1366x768, and no, it will not be true "1080p".

But if it is reasonable to do so (and if this laptop is 15.6"), then you should make a point to avoid 1366x768 resolution in a 15.6" display. 15.6" 1366x768 displays make things onscreen large, and tend to have very poor image quality due to low contrast. This, however, is not the case for 15.6" 1920x1080 displays.

http://shop.lenovo.com/us/laptops/ideapad/y-series/y570

I know, I know. There is a good deal floating around out there for a better HP with a 1920x1080 screen... but I just don't really like it that much. I used one of my friend's laptops for a fairly long period of time (1366x768, 15.6", same as the Lenovo) and honestly there was never a point where I had a negative thought about the screen, and I'm very aware when it comes to visuals.
July 30, 2012 8:03:59 PM

Consider the ASUS N53SM-AS51 instead.
- Since the version of the GT 555M included with the Y570 is actually a misbranded, higher-clocked GT 540M / GT 630M, the GT 630M GPU in this ASUS is actually somewhat similar. The GT 630M and the Y570's GT 555M both have 96 shaders, and both receive lower benchmark scores than the 'true' GT 555M with its 144 lower-clocked shaders does. The Y570's GPU tends to benchmark closer to a GT 630M than to a 'true' GT 555M. The Lenovo's GPU does have faster memory, though.
- Buying from Newegg does not incur sales tax charges, but buying from Lenovo.com does. Therefore, the price difference is actually less than it appears to be.
- The ASUS N53SM-AS51 includes a Core i5 processor, not a Core i7. However, which type of processor you get should be one of the least of your concerns. Gaming is bottlenecked by the GPU long before the type of CPU matters, especially with a lower-midrange GPU such as the ones in question here. And for basic usage (multitasking, movie watching, MS Office, email, web browsing, etc.), it makes essentially no difference what type of processor you have. Any perceived slowness during basic usage will be due to the hard drive speed, and can generally only be remedied by installing an SSD.

http://www.amazon.com/gp/product/B007MW73A4/ref=s9_simh...

But one of the biggest issues with 15.6" 1366x768 displays, even if you don't notice how little they let you fit onscreen, is that they generally cannot properly reproduce dark colors, since they tend to be cheap, low-tier LCD panels. Go find a 15.6" 1366x768 laptop and display a dark image on the screen. You should notice right away that black isn't dark, and actually appears as a grayish, purplish, or bluish gradient from the top to the bottom of the screen, sometimes with a quick "dark point" after which it starts getting lighter again. This will change depending on how you tilt the screen. You will really notice the issue if you compare it to any decent desktop monitor.
July 30, 2012 8:10:57 PM

Sorry, but shaders do not matter more than clocks for GPUs. As shader count increases, performance scaling from that shader count drops exponentially, because there are limits to how parallel a job can be run. Clock frequencies scale up performance almost perfectly all of the time, unless memory bandwidth holds the GPU back. This is why the 1280-shader Radeon 7870 at 1GHz can just about match the 1792-shader Radeon 7950 at 800MHz. The 7950's shader count advantage is much larger than the 7870's clock frequency advantage, and the 7950 even has a substantial memory bandwidth advantage, yet the 7950 is not noticeably faster than the 7870.

This phenomenon's exponential nature means that low shader counts can scale better than higher ones (going from 512 to 1024 scales better than going from 1024 to 2048), so low end GPUs have this effect minimized in comparison to higher end GPUs, but they still don't scale from shader count increases quite as well as from sheer clock frequency increases. The same is much less true for embarrassingly parallel compute tasks, but it is nearly unavoidable with gaming performance.
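To put rough numbers on the 7870 vs. 7950 example above (back-of-envelope arithmetic of my own, using shaders times clock as a crude throughput proxy that ignores ROPs, bandwidth, and architecture):

```python
# Raw "shader-GHz" for the two cards mentioned above.
hd7870 = 1280 * 1.000  # 1280 shaders at 1.0 GHz
hd7950 = 1792 * 0.800  # 1792 shaders at 0.8 GHz

print(hd7870)  # 1280.0
print(hd7950)  # ~1433.6 -- only ~12% more raw throughput on paper,
               # and real games close even that gap further.
```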
July 30, 2012 8:15:54 PM

I phrased it misleadingly. I just burnt through about 30 threads I had open in tabs, so I probably took a couple shortcuts in the way I phrased some things. Updated post.
July 30, 2012 8:34:26 PM

blazorthon said:
Sorry, but shaders do not matter more than clocks for GPUs. As shader count increases, performance scaling from that shader count drops exponentially, because there are limits to how parallel a job can be run. Clock frequencies scale up performance almost perfectly all of the time, unless memory bandwidth holds the GPU back. This is why the 1280-shader Radeon 7870 at 1GHz can just about match the 1792-shader Radeon 7950 at 800MHz. The 7950's shader count advantage is much larger than the 7870's clock frequency advantage, and the 7950 even has a substantial memory bandwidth advantage, yet the 7950 is not noticeably faster than the 7870.

This phenomenon's exponential nature means that low shader counts can scale better than higher ones (going from 512 to 1024 scales better than going from 1024 to 2048), so low end GPUs have this effect minimized in comparison to higher end GPUs, but they still don't scale from shader count increases quite as well as from sheer clock frequency increases. The same is much less true for embarrassingly parallel compute tasks, but it is nearly unavoidable with gaming performance.

I honestly don't know a lot about computers; could you do your best to explain this in a "simpler" way?

If it matters, these are the specs for the 555M, with the one in the Lenovo being the 96-core variant:

144 cores 709MHz (GF106), 128Bit GDDR5, e.g. MSI GX780
144 cores 590MHz (GF106), 192Bit DDR3, e.g. Dell XPS 17, Alienware M14x
144 cores 590MHz (GF106), 128Bit DDR3, e.g. Schenker XMG A501 / A701 (Clevo W150HRM / W170HN)
96 cores 753MHz (GF108), 128Bit GDDR5, e.g. Lenovo Y570p / Y560p
144 cores 525 MHz (GF116), 128 Bit DDR3, e.g. Medion Akoya P6812
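A rough way to rank the variants listed above (my own illustration: cores times clock as a very crude proxy, ignoring memory type, bus width, and the GF106/GF108/GF116 architecture differences):

```python
# Crude throughput proxy for the GT 555M variants: cores * clock (MHz).
variants = {
    "MSI GX780 (144c @ 709MHz)":        144 * 709,
    "Dell/Alienware (144c @ 590MHz)":   144 * 590,
    "Lenovo Y570p (96c @ 753MHz)":       96 * 753,
    "Medion Akoya (144c @ 525MHz)":     144 * 525,
}
for name, score in sorted(variants.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
# By this proxy the MSI variant leads (102096) and the Lenovo's
# 96-core part trails (72288), despite its higher clock.
```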
July 30, 2012 8:51:08 PM

wheresperry said:
I honestly don't know a lot about computers; could you do your best to explain this in a "simpler" way?

If it matters, these are the specs for the 555m, with the one in the Lenovo bolded:

144 cores 709MHz (GF106), 128Bit GDDR5, e.g. MSI GX780
144 cores 590MHz (GF106), 192Bit DDR3, e.g. Dell XPS 17, Alienware M14x
144 cores 590MHz (GF106), 128Bit DDR3, e.g. Schenker XMG A501 / A701 (Clevo W150HRM / W170HN)
96 cores 753MHz (GF108), 128Bit GDDR5, e.g. Lenovo Y570p / Y560p
144 cores 525 MHz (GF116), 128 Bit DDR3, e.g. Medion Akoya P6812


http://en.wikipedia.org/wiki/Amdahl%27s_law
That gives a more detailed explanation of it. As you spread a task among more and more threads, it gets more and more difficult to utilize them well with complex tasks, and with a GPU, the cores aren't the only aspect of performance within the hardware to worry about. No matter how many cores you have, there are also the ROPs and more to consider. Increasing the GPU frequency also increases the frequency of that other hardware, but increasing the core count doesn't increase their performance when you don't also increase the number of ROPs and memory bandwidth to accommodate them. This is just one example. All of this, along with the link that I posted at the top of this post, causes GPU shader scaling to get worse and worse, and there are even more reasons (such as the increasing distance, as measured in transistors, between each part of the hardware).
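The Amdahl's law formula from the linked article can be sketched directly (my own worked example, with an assumed parallel fraction of 90%):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup from running the parallel fraction p of a
    task on n times as many execution units."""
    return 1 / ((1 - p) + p / n)

# Even with 90% of the work parallel, doubling units over and over
# gives diminishing returns, capped at 1/(1-p) = 10x:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))  # 1.82, 3.08, 4.71, 6.4
```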

I'm not very familiar with many lower-end mobile Nvidia cards, but if I had to guess, I'd think that the MSI would somewhat beat the Lenovo, which in turn would beat the rest of the laptops more significantly. The Lenovo's GPU has the higher frequency, but a less than 10% higher frequency is unlikely to beat a 35% core count advantage. However, that would just be in terms of raw graphics performance... which laptop is overall better for you might not depend strictly on graphics performance. How the mentioned ASUS fits in here, I would need more time to look into, and I'm out of time for the day until much later tonight.
July 30, 2012 9:23:32 PM

blazorthon said:
http://en.wikipedia.org/wiki/Amdahl%27s_law
That gives a more detailed explanation of it. As you spread a task among more and more threads, it gets more and more difficult to utilize them well with complex tasks, and with a GPU, the cores aren't the only aspect of performance within the hardware to worry about. No matter how many cores you have, there are also the ROPs and more to consider. Increasing the GPU frequency also increases the frequency of that other hardware, but increasing the core count doesn't increase their performance when you don't also increase the number of ROPs and memory bandwidth to accommodate them. This is just one example. All of this, along with the link that I posted at the top of this post, causes GPU shader scaling to get worse and worse, and there are even more reasons (such as the increasing distance, as measured in transistors, between each part of the hardware).

I'm not very familiar with many lower-end mobile Nvidia cards, but if I had to guess, I'd think that the MSI would somewhat beat the Lenovo, which in turn would beat the rest of the laptops more significantly. The Lenovo's GPU has the higher frequency, but a less than 10% higher frequency is unlikely to beat a 35% core count advantage. However, that would just be in terms of raw graphics performance... which laptop is overall better for you might not depend strictly on graphics performance. How the mentioned ASUS fits in here, I would need more time to look into, and I'm out of time for the day until much later tonight.

Thanks.

1st bolded: Could you elaborate on this a bit? I think I get what you're saying, but the wording might be a little weird and I want to make sure.

2nd bolded: This is the thing, I've fallen in love with the Lenovo. The Asus, while a great laptop and obviously a very good company, just seems a bit... I don't know... soulless/generic to me. I'd be willing to slightly compromise to get the Lenovo, which I would thoroughly enjoy using.

How much better would you say the Asus performs regarding gaming than this Lenovo?

http://shop.lenovo.com/SEUILibrary/controller/e/web/Len...

(coupon gets it down to $700 flat)

Thanks for the feedback everyone.
July 31, 2012 1:18:54 AM

The ASUS doesn't perform better for gaming, but it looks better for gaming and for everything else due to its significantly better display quality, and it is also better for general usage because you can fit more on your screen.
July 31, 2012 1:46:47 AM

wheresperry said:
Thanks.

1st bolded: Could you elaborate on this a bit? I think I get what you're saying, but the wording might be a little weird and I want to make sure.

2nd bolded: This is the thing, I've fallen in love with the Lenovo. The Asus, while a great laptop and obviously a very good company, just seems a bit... I don't know... soulless/generic to me. I'd be willing to slightly compromise to get the Lenovo, which I would thoroughly enjoy using.

How much better would you say the Asus performs regarding gaming than this Lenovo?

http://shop.lenovo.com/SEUILibrary/controller/e/web/Len...

(coupon gets it down to $700 flat)

Thanks for the feedback everyone.


1. I was saying that I don't deal much with Nvidia's low-end graphics cards (I'm slowly working my way into them), but judging from the specs of each graphics setup, the graphics in the MSI beat the Lenovo's somewhat, and the rest of the laptops don't even come close to the MSI and the Lenovo.

I think that edit1754 answered your other question excellently.
July 31, 2012 2:14:57 AM

blazorthon said:
Sorry, but shaders do not matter more than clocks for GPUs. As shader count increases, performance scaling from that shader count drops exponentially, because there are limits to how parallel a job can be run. Clock frequencies scale up performance almost perfectly all of the time, unless memory bandwidth holds the GPU back. This is why the 1280-shader Radeon 7870 at 1GHz can just about match the 1792-shader Radeon 7950 at 800MHz. The 7950's shader count advantage is much larger than the 7870's clock frequency advantage, and the 7950 even has a substantial memory bandwidth advantage, yet the 7950 is not noticeably faster than the 7870.

This phenomenon's exponential nature means that low shader counts can scale better than higher ones (going from 512 to 1024 scales better than going from 1024 to 2048), so low end GPUs have this effect minimized in comparison to higher end GPUs, but they still don't scale from shader count increases quite as well as from sheer clock frequency increases. The same is much less true for embarrassingly parallel compute tasks, but it is nearly unavoidable with gaming performance.


then how come the new 680 has 1500 plus cores?
July 31, 2012 2:25:31 AM

cbrunnem said:
then how come the new 680 has 1500 plus cores?


What do you mean? The 680 has 1536 CUDA cores simply because that is how many Nvidia chose to include. Shader count increases don't scale perfectly, but they do scale upwards and are an easy way to add performance. What do you think is an easier way to get, say, another 40% performance: increasing the core count, or redesigning the cores for greater performance and/or die-shrinking them (which still means a redesign is necessary, just not as much of one if you go strictly for a die shrink and not a new architecture)? The answer is simple. It is much easier and cheaper in R&D to make a larger GPU with more cores than it is to die-shrink a GPU or, even worse, create a new architecture. Companies have to do these things eventually anyway, because increasing core count can only be taken so far, but it does make for a great way to distinguish between lower-end and higher-end cards.
July 31, 2012 2:28:12 AM

blazorthon said:
What do you mean? The 680 has 1536 CUDA cores simply because that is how many Nvidia chose to include. Shader count increases don't scale perfectly, but they do scale upwards and are an easy way to add performance. What do you think is an easier way to get, say, another 40% performance: increasing the core count, or redesigning the cores for greater performance and/or die-shrinking them (which still means a redesign is necessary, just not as much of one if you go strictly for a die shrink and not a new architecture)? The answer is simple. It is much easier and cheaper in R&D to make a larger GPU with more cores than it is to die-shrink a GPU or, even worse, create a new architecture. Companies have to do these things eventually anyway, because increasing core count can only be taken so far, but it does make for a great way to distinguish between lower-end and higher-end cards.


Ah, I thought you were talking about core counts. Should have read more carefully.
July 31, 2012 2:30:34 AM

cbrunnem said:
Ah, I thought you were talking about core counts. Should have read more carefully.


Shaders and CUDA cores are both cores. AMD uses shader cores and Nvidia uses CUDA cores, but they're just two different types of floating-point math processing cores.