That said, a better approach would be to prevent kids under a certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access (a dumb phone) or one with curated/limited access.
Personally, I'm not too worried about what risqué stuff they'll see online, especially teenagers (they'll find it one way or another); what concerns me more is the distraction smartphones cause.
Thinking back to my teenage years, I'm almost certain I would have been tempted to waste too much time online when I'd have been better off doing homework or playing sport.
It goes without saying that smartphones are designed to be addictive, and we need to protect kids more from this addiction than from bad online content. That's not to say they should have unfettered access to extreme content; they should not.
It seems to me that having access to only filtered IP addresses would be a better solution.
This ill-considered gut reaction involving the whole community isn't a sensible decision, if for no other reason than it allows sites like Google to hoover up even more of a user's personal information.
Complains about mollycoddling.
> a better approach would be to limit
Immediately proposes new mollycoddling scheme.
Yes, right now search engines are only going to blur out images and turn on safe search, but the decision to show or hide information in safe search has alarming grey areas.
Examples of things that might be hidden and which someone might want to access anonymously are services relating to sexual health, news stories involving political violence, LGBTQ content, or certain resources relating to domestic violence.
Seems like a long-term slow burn toward government tendrils, just like digital ID. And the example given came across as desperate to show any real function; contradictory, even.
Then the pivot: what about the children. Small steps, and right back on the slippery slope we are.
This wouldn't allow them to watch gambling ads or enjoy Murdoch outlets.
Yes, that empire exported itself to where it would have the greatest effect—cause the most damage.
This would be completely and utterly unenforceable in any capacity. Budget smartphones are cheap enough and ubiquitous enough that children don't need your permission or help to get one. Just as I didn't need my parents' assistance to have three different mobile phones in high school when, as far as they knew, I had zero phones.
That is true. I spent my time coding a 2D game engine on a 486; it eventually went nowhere, but it was still cool to do. But if I'd had the internet then, all that energy would have been put into pointless internet stuff.
And for me it was a place to explore my passions way better than any library in a small city in Poland would allow.
And sure - also a ton of time on internet games / MUDs, chatrooms etc.
And the internet allowed me to publish my programs, written in Delphi, since I was 13-14 years old, and to meet other programmers on Usenet.
On the other hand, if not for the internet, I might have socialised way more IRL, probably doing things that were way less intellectually developing (but more social).
It just hit me that I need to ask one of my friends from that time what they did in their spare time, because I honestly have no idea.
The worst content out there is typically data-heavy; the best isn't necessarily, since in most cases it can just as well be text.
This seems out of place and unrelated. If anything, Gen Z and presumably Alpha, eventually, are more religious than their parents.
2027: the companies providing the logins must provide government with the identities
2028: because VPNs are being used to circumvent the law, if the logging entity knows you're an Australian citizen, they must still apply the law even if you're not in Australia or using an Aussie IP address
2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
2031: Australian ISPs must enforce the login restrictions because some sites are refusing to and there are loopholes
2033: Australian ISPs must provide the government with a list of people who visited this list of specific sites, with dates and times of those visits
2035: you must be logged in to visit these other specific sites, regardless of your age
2036: you must have a valid login with one of these providers in order to use the internet
2037: all visits to all sites must be logged in
2038: all visits to all sites will be recorded
2039: this list of sites cannot be visited by any Australian of any age
2040: all visits to all sites will be reported to the government
2042: your browser history may be used as evidence in a criminal case
Australian politicians, police, and a good chunk of the population would love this.
Australia is quietly extremely authoritarian. It's all "beer and barbies on the beach" but that's all actually illegal.
> 2038: all visits to all sites will be recorded
That's been the case since 2015. ISPs are required to record customer ID, record date, time and IP address and retain it for two years to be accessed by government agencies. It was meant to be gated by warrants, but a bunch of non-law-enforcement entities applied for warrantless access, including local councils, the RSPCA (animal protection charity), and fucking greyhound racing. It's ancient history, so I'm not sure if they were able to do so. The abuse loopholes might finally be closed up soon though.
https://privacy108.com.au/insights/metadata-access/
https://delimiter.com.au/2016/01/18/61-agencies-apply-for-me...
https://www.abc.net.au/news/2016-01-18/government-releases-l...
https://ia.acs.org.au/article/2023/government-acts-to-finall...
We already reached that point several years ago.
Block lists are not new. For example Italy blocks a number of sites, usually at DNS level with the cooperation of ISPs and DNS services. You can autotranslate this article from 2024 to get the gist of what is being blocked and why https://www.money.it/elenco-siti-vietati-italia-vengono-pers...
I believe other countries of the same area block sites for similar reasons.
I would like to say "It is all because of X political party!" but both the majors are the same in this regard and they usually vote unanimously on these things.
Some states in the US are doing this already. And I think I saw a headline about some country in Europe trying to put Twitter in that category, implying they have such rules there already.
Not quietly, I don't think. Not like Australia is known for freedom and human rights. It's known for expeditionary wars, human rights abuses, jailing whistleblowers and protesters, protecting war criminals, environmental and social destruction, and following the United States like a puppy.
As others have said, that's the case already and not just in Australia. Same in lots of other places like the UK and the whole EU. Less so in the US (though they can demand any data the ISP has, and require ISPs to collect data on individuals)
> Australia is quietly extremely authoritarian.
It is weird; as a recent-ish migrant I do agree. There are rules for absolutely bloody everything here, and the population seems in general to be very keen on "Ban it!" as a solution to everything.
It's also rife with regulatory capture - Ah, no mate, you can't change that light fitting yourself, gotta get a registered sparky in for that or you can cop a huge fine. New tap? You have to be kidding me, no, you need a registered plumber to do anything more than plunger your toilet, and we only just legalised that in Western Australia last year.
It's been said before, but at some point the great Aussie Larrikin just died. The Wowsers won and most of them don't even know they're wowsers.
It seems quite likely that governments want to continuously chip away at privacy.
Not a convincing take.
How can you argue any of this is NOT in the interest of centralised surveillance and advertising identities for ADULTS when there’s such an easy way to bypass the regulation if you’re a child?
Can’t say I blame them.
This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.
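To put rough, purely illustrative numbers on that base-rate problem (none of these are real platform figures), a quick TypeScript sketch:

    // Illustrative base-rate arithmetic: even a very accurate classifier
    // mostly flags innocent content when the bad content is rare.
    const posts = 1_000_000;
    const badRate = 0.0001;         // assume 1 in 10,000 posts is actually bad
    const recall = 0.99;            // classifier catches 99% of bad posts
    const falsePositiveRate = 0.01; // and wrongly flags 1% of good posts

    const truePositives = posts * badRate * recall;                   // 99
    const falsePositives = posts * (1 - badRate) * falsePositiveRate; // ~10,000
    console.log(falsePositives / (truePositives + falsePositives));   // ~0.99

Under those assumptions, roughly 99% of everything flagged is innocent; that's the catastrophic false-positive rate in concrete terms.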
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and then the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so they can throw the first group under the bus.
The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.
p.s. i agree with your comment.
Manufactured by whom? Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The magic solution is if you can't operate at scale safely, don't operate at scale.
https://en.wikipedia.org/wiki/Manufacturing_Consent
> Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The difference isn't the scale of Google, it's the scale of the internet.
Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.
If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.
Edit: you can't just throw out a Wikipedia link to Manufacturing Consent from the 80s as an explanation here. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy, or maybe they just don't like what kids are exposed to at a young age these days.
Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.
> Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.
> Maybe search is dead but doesn’t know it yet.
Maybe some people who prefer the cathedral to the bazaar would prefer that. But ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.
I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.
Meanwhile, moral panics are at least as old as the Salem Witch Trials.
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article describes the difficulty of doing moderation. If you make a general-purpose algorithm that links up people with similar interests, and there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that. And if you push on it to make it do something different in that case than in the general case, the people you're trying to thwart will actively take countermeasures, like using different keywords or coded language.
Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.
Pretending this is easy to solve is what authoritarians do to justify steamrolling innocent people over a problem nobody has any good way to fully eliminate.
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
Some times, but clearly not often enough.
Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
> Then you report something real and it gets lost in an sea of fake reports.
It didn't get 'lost': they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message saying they had decided not to do anything about it.
> What are they supposed to do about that?
They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
That's assuming their "review team" actually reviewed it before sending that message and purposely chose to leave it up knowing it was a false negative. That seems pretty unlikely compared to the alternative: the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory that the error was made blindly.
> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.
If the system is pathologically unable to deal with false reports, to the extent that moderation has effectively ground to a standstill, perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?
This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.
In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.
This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.
It is in combination with the high rate of false positives, unless you think the false positives were intentional.
> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.
If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.
What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
E.g. if you produce eggs and you can't avoid salmonella at some point your operation should be shut down.
Facebook and its ilk have massive profits, they can afford more moderators.
By this principle the government can't operate the criminal justice system anymore, because it has too many false positives and uncaptured negative externalities, and then you don't have anything left to use to tell Facebook to censor things.
> Facebook and its ilk have massive profits, they can afford more moderators.
They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
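As a back-of-the-envelope sketch (all figures assumed for illustration, not Meta's actual numbers):

    // Illustrative only: what ~$1/user/month of profit buys in human review.
    const profitPerUserMonth = 1;    // USD, the rough figure above
    const moderatorCostMonth = 3000; // USD, assumed fully-loaded cost
    const reviewsPerModMonth = 1700; // ~10 reviews/hr * ~170 working hrs

    // Spending ALL profit on moderation buys each user this many reviews:
    console.log(reviewsPerModMonth * (profitPerUserMonth / moderatorCostMonth));
    // ~0.57 reviewed items per user per month, before any other costs

Under those assumptions, burning the entire profit on moderation gets you well under one human-reviewed item per user per month.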
Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.
It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.
> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
The premise there is that you could solve the problem for $30 per person annually, i.e. less than $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?
Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. Are you sure you want to entrench the incumbents as a permanent oligarchy?
Do like banks: Know Your Customer. If someone commits a crime using your assets, you are required to supply evidence to the police. You then ban that person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
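A minimal sketch of how that might look in practice (names, weights, and thresholds all invented for illustration, not any platform's real system):

    // Weight reports by the reporter's track record, per the idea above.
    interface Reporter {
      id: string;
      confirmed: number; // reports that turned out to be valid
      rejected: number;  // reports that turned out to be false
    }

    function reportWeight(r: Reporter): number {
      const total = r.confirmed + r.rejected;
      // New verified-identity accounts start at full weight; serial
      // false reporters decay toward zero (Laplace smoothing).
      return total === 0 ? 1 : (r.confirmed + 1) / (total + 2);
    }

    function shouldQueueForReview(r: Reporter): boolean {
      return reportWeight(r) > 0.2; // threshold is arbitrary
    }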
But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.
And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
My contention is more that they don't have the will, because it would impact profits, and that it's possible that implementing effective moderation at scale would hurt their bottom line so much they'd be unable to keep operating.
Further, that I would not lament such a passing.
I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Find a way, or stop operating that service.
As but one possible example: common infrastructure to handle whitelisting would probably go a long way here. Just being able to tag a phone, for example, as being possessed by a minor would enable all sorts of voluntary filtering with only minimal cooperation required.
Many sites already have "are you 18 or older" type banners on entry. Imagine if those same sites attached a plaintext flag to all of their traffic so the ISP, home firewall, school firewall, or anyone else would then know to filter that stream for certain (tagged) accounts.
I doubt that's the best way to go about it but there's so much focus on other solutions that are more cumbersome and invasive so I thought it would be interesting to write out the hypothetical.
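For what it's worth, the flag itself could be almost trivially simple. A minimal sketch, assuming a made-up `Content-Rating` header (the RTA label, `RTA-5042-1996-1400-1577-RTA`, is the closest existing analogue):

    // Site side: tag every response in plaintext so a middlebox (ISP,
    // home router, school firewall) could filter the stream for accounts
    // flagged as belonging to a minor. The header name is invented.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader("Content-Rating", "adult");
      res.end("site content");
    }).listen(8080);

    // Middlebox side: only two inputs needed, the tag and the account flag.
    function shouldBlock(headers: Record<string, string>, isMinor: boolean) {
      return isMinor && headers["content-rating"] === "adult";
    }

Of course, with TLS everywhere a middlebox can't actually read response headers without breaking encryption, so in practice the flag would have to surface at a visible layer like DNS; that's part of why this stays a hypothetical.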
Seems like right now the Aus Government isn't sure how they want it to work and is currently trialing some things. But it does seem like they at least don't want social media sites collecting ID.
I guess if a teenager is enterprising enough to get a job and save up and buy their own devices and pay for their own internet then more power to them.
Why is this even controversial? Is there any rational reason why kids should have smartphones? The only reason I see is to let the big companies earn money, and because adults don't want to admit that they are addicted themselves.
https://www.intelligence.gov.au/news/asio-annual-threat-asse...
At the time it was obvious to many astute observers what was happening but governments themselves were mesmerized and awed by Big Tech.
A 20-plus-year delay in applying regulations means it'll be a long, hard road to put the genie back in the bottle. For starters, there's too much money now tied up in these trillion-dollar companies; to disrupt their income would mean shareholders and even whole economies would be affected.
Fixing the problem will be damn hard.
(It may be the last thing that the US has the world lead on)
It's also why legislation protecting privacy and/or preventing the trade of personal information is almost impossible: the "right" people profit from it, and the industry around it has grown large enough that it would have non-trivial economic effects if it were destroyed (no matter how much it thoroughly deserves to be destroyed with fire).
It seems like it would make more sense to implement it at the browser level. Let the website return a header (à la RTA) or trigger some JavaScript API to indicate that the browser should block the tab until the user verifies their age.
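A sketch of what that browser side might look like, assuming a hypothetical `navigator.requestAgeVerification()` (no such API exists today; the RTA meta tag is the only real-world part):

    // The browser, not the site, performs the check, and only a boolean
    // comes back, so no identity ever reaches the website.
    const rated =
      document.querySelector('meta[name="RATING"]')
        ?.getAttribute("content") === "RTA-5042-1996-1400-1577-RTA";

    if (rated) {
      // requestAgeVerification() is invented for this sketch.
      (navigator as any).requestAgeVerification(18).then((ok: boolean) => {
        if (!ok) {
          document.body.textContent = "Blocked pending age verification.";
        }
      });
    }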
IMO an "ok" solution to the parents' requirement of "I want my kids to not watch disturbing things" might be to enforce domain tags (violence, sex, guns, religion, social media, drugs, gambling, whatever) and allow ISPs to set filters per paying client, so people don't have to set up filters on their own (but they can); a rough sketch follows below.
But it's a complex topic, and IMO a simpler solution is to just not let kids alone in the internet until you trust them enough.
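Here's what that per-client filter could look like on the ISP side (tags, domains, and the registry itself are all invented for illustration):

    type Tag = "violence" | "sex" | "gambling" | "social" | "drugs";

    // Hypothetical shared registry mapping domains to content tags.
    const domainTags: Record<string, Tag[]> = {
      "example-casino.com": ["gambling"],
      "example-social.com": ["social"],
    };

    // Each paying client opts into a set of tags to block.
    function isBlocked(domain: string, blocked: Set<Tag>): boolean {
      return (domainTags[domain] ?? []).some((t) => blocked.has(t));
    }

    // e.g. a household that filters gambling but allows social media:
    isBlocked("example-casino.com", new Set<Tag>(["gambling"])); // true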
…
It pushes for heavy content filtering, age checks, and algorithm tweaks to hide certain results. That means more data tracking and less control over what users see. Plus, regulators can order stuff to be removed from search results, which edges into censorship. It sets the stage for broader control, surveillance, and over-moderation. The slow-burn additions all stack up: digital ID, the NBN monopoly, ISP-locked DNS servers, TR-069, hidden VoIP credentials, etc. Australia seems to be the West's testing ground for this kind of policy.
While I yearn for the more authentic and sincere days of the internet I grew up on, I recognize very quickly by visiting x or facebook how much it isn’t that, and hasn’t been for a long time.
I think this bill is a good thing and I support it.
In the days before electronics were endemic, physically checking a photo ID didn't run afoul of that as long as the person checking didn't record the serial number. But that's no longer the world we live in.
Same here. Early on, if I found a site interesting I'd often follow its links to other sites and so on down into places that the Establishment would deem unacceptable but I'd not worry too much about it.
Nowadays, I just assume authorities of all types are hovering over every mouse click I make. Not only is this horrible but it also robs one of one's autonomy.
It won't be long before we're handing info that was once commonplace in textbooks around in secret.
> Drafting of the code was co-led by Digital Industry Group Inc. (DIGI), which was contacted for comment as it counts Google, Microsoft, and Yahoo among its members.
That would have the same effect.
Most legislation aims to create an offence of misleading, not to actually stamp out 100% of offenders. Kids who get around this will create liabilities for themselves and their parents.
Unrelated, but why I don't agree:
The systems that permit voting down stupid laws also permit voting down good laws. This is very "be careful what you wish for", reducing democracy to "the voter is always right even when they want stupid things".
E.g. Swiss cantons opposing votes for women inside the last 2 decades.
Apologies. I'm already pretty morose over the USA Supreme Court allowing age verification, which although claiming to target porn seems so likely to cudgel any "adult" or sexual material at all.
Until recently the Declaration of Independence of Cyberspace has held pretty true. The online world has seen various regulations, but mostly it's been taxes and businesses affected, and here we see a turn where humanity is now denied access by their governments, where we are no longer allowed to connect or to share, not without flashing our government-verified ID. It's such a sad lowering of the world, by such absolute loser politicians doing such bitter, pathetic anti-governance for such low reasons. They impinge on the fundamental dignity and respect inherent in mankind here, in these intrusions into how we may think and connect.
Links for recent Texas age verification: https://www.wired.com/story/us-supreme-court-porn-age-verifi... https://news.ycombinator.com/item?id=44397799
It isn’t. For as long as I can remember it’s been wildly authoritarian, and it seems Australians harbour a fetish for the rules that would make even the average German blush.
Hopefully times have changed (though I don’t think they have), but about 20 years ago, standard fare on the road was to provide essentially no driver training, and then aggressively enforce draconian traffic rules. New drivers can’t drive at night. New drivers have to abide by lower speed limits than other drivers. Police stop traffic for random breathalyser tests. “Double demerit” days…
This seems like more of the same. Forget trying to educate the population about the dangers of free access to information (which they will encounter anyway). Just go full Orwell! What could go wrong!
/s
Better I give a little bit of PII than some kid grows up too early.
Would you be able to tell the difference if this policy came from a place of compassion?
Nothing says “not living in Soviet Russia” like having to show your papers to access information.
I really wish all this time, effort, and money was spent on educating our kids to safely navigate the online world.
It's not like they'll magically figure it out for themselves once they turn 17.
The UK PM and the AU PM backed the US position and sent troops in (in the AU case they even sent in advance rangers | commandos | SASR to scout and call targets from the ground), but they were both aware the "justification" and WMD claims were BS.
What you describe is more like the debate on continental Europe, which translated in little support (most countries provided help with logistics and minimal "peacekeeping").
https://www.greenleft.org.au/content/halliburton-australia-p...
Been ongoing for a while now: https://roncobb.net/img/cartoons/aus/k5092-on-Tucker_Box-cuu...
This has led to serious problems in the case of the Afghan war, where it was clear that this whole conflict had nothing to do with Australia, could not even vaguely be construed as "defence", achieved nothing, cost Australian lives, and was a completely fabricated mess that we got into for really bad reasons (I paraphrase). The SAS war crimes thing was a symptom of our unease at our involvement (imho) - we would not normally question the things that soldiers do in conflict; this was more a way of questioning why we were in the conflict in the first place.
Afterwards, the same people who employed this rhetoric claimed they "always knew the claims were false".
There was a definite risk of loss of political capital for would-be dissenters. Politicians may or may not have had skeptical reservations; it is a moot point if they didn't proactively dissent. Similarly, it isn't especially meaningful in the context of this discussion if those who did dissent were locked out of popular media discourse. The overall media environment repeated the claims unquestioningly. Dissent was maligned as conspiracy theory.
Another interesting manifestation were those who claimed that WMDs were found. Clearly the goal posts were shifted here. Between those who were "always suspicious" and those who believe that the standards of WMDs were met, very few people remain who concede that they were hoodwinked by the propaganda narrative. Yet at the same time, it isn't a stretch to observe that a war or series of wars was started based on false premises. No one has been held to account.
Nothing screams “not living in Soviet Russia” like having a ministry of truth.
I don't see kids being banned from reading history books, which would be more like the world you're describing. I see a country which is pretty multicultural and open-minded trying its best to protect itself from the absolute nonsense that circulates online. When I was a kid, I could only watch certain TV shows because my bed time was 7:30-8pm; that's when the "naughty stuff" came on TV. Was that the ministry of truth at work?
Do you have any idea what kids are exposed to now? I mean the answer is probably no, you have no idea. But judging by the rot I see my younger friends and family members watch and regurgitate, I can tell you, it's not great.
Nothing screams "fear mongering" like comparing with living in Soviet Russia.
Look, we can argue all day. There is no right or wrong answer. I don't fully support the govts initiative but I also don't want Meta/X/Google to have unlimited powers like they do in the US.
Various large US tech companies played a central role in drafting this initiative. I don't think you're reasoning about this clearly.
How exactly does this curtail their powers?
I agree though, most information is misinformation, even the most popular stuff, Joe Rogan et al.
And at no point does it ever occur to you to demand proof that measures such as this will have the desired effect... or, indeed, that the desired effect is indeed worth achieving at all.
Ideally I'm for anonymous tokens, but something is still better than nothing.
You probably should have started your censorship campaign with the usual bugaboos -- comics, video games, porno mags -- and not with history books.