Archived from groups: alt.comp.hardware.overclocking.amd
I know, bad form to follow myself up, but I got to thinking about what
you said below:
In article <gYpXe.253811$E95.200845@fed1read01>, nobody@nobody.there
says...
> Seems like that's something from the very early internet days. Shouldn't
> kill a server nowadays.
A lot of the world's Usenet users are still on dial-up connections.
They may have to pay by-the-minute connection charges. Why should they
have to pay more than once to download the same message?
Did a little digging. Yesterday's total news feed was over 1 terabyte
of data. You can do the math if you want, but let's just say that works
out to roughly 102 Mb a second. What goes in must go out, so let's say
conservatively 204 Mb/s. That's a full feed coming in and a full feed
going out to peers, other news providers, and readers. For one day. Ten
days' average retention across all groups, plus backups: 20 terabytes.
That's a lot of 147 GB SCSI hard drives at $1000 USD a pop.
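If you want to check that math, here's a quick back-of-envelope sketch. The 1 TB/day feed is as quoted; the binary-terabyte convention, the backup-copy assumption, and the 147 GB / $1000 drive figure are my assumptions for illustration:

```python
import math

# Back-of-envelope Usenet feed math, assuming a 1 TB/day full feed
# and binary units (1 TB = 2**40 bytes).
TB = 2 ** 40
feed_bytes_per_day = 1 * TB
seconds_per_day = 24 * 60 * 60

inbound_mbps = feed_bytes_per_day * 8 / seconds_per_day / 1e6
print(f"inbound:  {inbound_mbps:.0f} Mb/s")        # ~102 Mb/s
print(f"in + out: {2 * inbound_mbps:.0f} Mb/s")    # ~204 Mb/s

# Ten days of retention, doubled for a backup copy (my assumption):
retention_tb = 10 * 2
drive_gb = 147                    # 147 GB SCSI drive, ~$1000 at the time
drives = math.ceil(retention_tb * 1024 / drive_gb)
print(f"storage: {retention_tb} TB -> {drives} drives, ~${drives * 1000:,}")
```

So a full feed alone is in the neighborhood of 140 big SCSI drives before you've bought a single server.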
Now, text groups are probably only one percent of that. The rest is
binaries. That's at least 5 DS3 lines at 45 Mb/s each, because there's
the need for backup lines. Actually this is really conservative,
because BIG news providers probably deal with OC-48s and higher, since
they resell to a lot of smaller news providers. And they probably get
charged not only for the data line, but for the amount of data that
moves over it.
Now you've got to store this data for a while. At 1 percent of the
news feed, the news providers can be generous with hard drive storage
for the text groups. News servers and server-quality hard drives aren't
cheap, and that's before you include the software that runs them. News
servers aren't off-the-shelf items; they're built to spec. I've heard
numbers starting at a quarter mil apiece.
This newsgroup, on Supernews, has 500.5 days of storage with a total
of 12622 messages. Which is why you see people say "Google for it" or
"check the archives." Unless it's something relatively new, it's
probably been discussed before and the answer is already out there.
But I digress.
The news admins have a saying: "It doesn't scale." A few people
multiposting is no big deal, but it's a bad habit to get into.
As more people start doing it, the data retention drops, because there
are quotas on the drives as to how much space a group can have.
As an extreme example, take a look at some of the binary groups. The
retention there is measured in hours. Why? Because somebody will
upload a 4 GB movie in 50 parts and one of the parts won't make it, or
becomes corrupted. Then somebody will ask for a repost, and 10 people
might just post the required part while 2 more repost the entire 50
parts.
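A rough sketch of the repost arithmetic in that scenario (part size, repost counts, and binary units are as in the example above):

```python
# One 4 GB movie in 50 parts, one part lost; 10 people repost the
# missing part, 2 more repost the whole thing.
GB = 2 ** 30
movie = 4 * GB
part = movie // 50                 # roughly 80 MB per part

repost_bytes = 10 * part + 2 * movie
total_bytes = movie + repost_bytes
print(f"original upload: {movie / GB:.1f} GB")         # 4.0 GB
print(f"repost traffic:  {repost_bytes / GB:.1f} GB")  # ~8.8 GB
print(f"total:           {total_bytes / GB:.1f} GB")   # ~12.8 GB
```

One dropped 80 MB part turns into more repost traffic than the original upload.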
Now scale that up to 25 people uploading 25 different 4 GB movies.
Some groups get so much traffic that the first parts start aging off
the server before the last parts get uploaded, so more people start
asking for reposts. Next thing you know, somebody creates
alt.binaries.x.repost to handle the reposts and get the retention up.
Say it's "The Way We Were" with Babs. Should it be crossposted, or
multiposted, to alt.binaries.dvd, alt.binaries.movies.repost, and
alt.binaries.barbra.streisand?
That's a minimum of 4 GB (crossposted) to 12 GB (multiposted) of that
terabyte feed going in on the upload, and who knows how many GB in the
downloads.
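The crosspost-versus-multipost difference works out roughly like this (group names as in the example; the key point is that a crossposted article is stored once and listed in every group, while multiposting sends a separate full copy per group):

```python
# Crossposting vs. multiposting the same 4 GB movie to three groups.
movie_gb = 4
groups = [
    "alt.binaries.dvd",
    "alt.binaries.movies.repost",
    "alt.binaries.barbra.streisand",
]

crosspost_gb = movie_gb                 # one copy, three group listings
multipost_gb = movie_gb * len(groups)   # three separate full copies
print(f"crosspost: {crosspost_gb} GB in the feed")   # 4 GB
print(f"multipost: {multipost_gb} GB in the feed")   # 12 GB
```

Same movie, same three groups, triple the feed traffic if it's multiposted.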
Lather, rinse, repeat.
Bad analogy ahead warning.
If I invite somebody into my home and they put their feet up on the
table, I'll ask them to take them off. If they take off their shoes,
put their feet back up on the table, and say they can't scratch it
now, I'll ask them to leave and not invite them back. They're free to
put their feet up on the table in their own home and anywhere else
that may tolerate that sort of boorish behavior.
It wasn't putting their feet up on the table that got them thrown out,
it was insisting they should be able to do it after I asked them not
to.
If somebody multiposts after being asked not to and they insist on
doing it, nobody can stop them, but I don't have to see their
posts/boorish behavior any more. My house, my computer, my rules.
Y.M.M.V.
Bill