That’s a pretty big ambiguity in the story!
Uncertainty isn't good for engagement, even if it's correct.
It is not that hard. It is a fruit roughly the same size and appearance as an orange, but more bitter. See! I explained it. :) Joking aside, what are you trying to explain about grapefruit to your elderly parents? Is it the weird way it interacts with certain medicines?
https://pmc.ncbi.nlm.nih.gov/articles/PMC3589309/
>Our research group discovered the interaction between grapefruit and certain medications more than 20 years ago [1990s].1–3 Currently, more than 85 drugs, most of which are available in Canada, are known or predicted to interact with grapefruit. This interaction enhances systemic drug concentration through impaired drug metabolism. Many of the drugs that interact with grapefruit are highly prescribed and are essential for the treatment of important or common medical conditions. Recently, however, a disturbing trend has been seen. Between 2008 and 2012, the number of medications with the potential to interact with grapefruit and cause serious adverse effects (i.e., torsade de pointes, rhabdomyolysis, myelotoxicity, respiratory depression, gastrointestinal bleeding, nephrotoxicity) has increased from 17 to 43, representing an average rate of increase exceeding 6 drugs per year. This increase is a result of the introduction of new chemical entities and formulations.
Obviously that form is oversimplified. But since I'm not a pharmacist or a doctor, I can allow myself this simplification because it "fails safe". That is, it might make me refrain from eating grapefruit in a situation where I could safely do so, but it will save me from eating grapefruit in situations where it is not safe. It would be harder if I needed to remember that I must eat grapefruit in some situations and can't in others.
The reason I'm saying this is that this is how I would approach explaining it to someone: by oversimplifying to the point where the safe story is easy to remember. People already understand that they can't mix alcohol and certain medications. So it is just one more thing you can't mix with medication.
That is my other “tasty fruit with a weirdly dangerous rare side-effect” fact.
> I have to tell my parents to stop reading
As a researcher myself, I really dislike that this is even a thing. I constantly have friends send me articles asking what I think about them (frequently the answer is "I have no idea" and/or "the paper says something different"). I'm livid about this because it erodes public trust in science. Worse, people don't see that connection...
I don't understand how major news publications can't be bothered to actually reach out to authors. Or how universities themselves will do that and embellish the work. I get that it's "only a little" embellishment, but it's not a hypothetical slippery slope considering how often we see it compound (and how "only a little" is an excuse rather than acting in good faith). The truth is that the public does not understand the difference in scientists' confidence levels for things like "anthropogenic climate change" vs "drinking wine and eating chocolate is healthy for you." To them, it's just "scientists," who are some conglomerate. It is so pervasive that I can talk to my parents about things I have domain expertise in and have written papers on, and they believe scientists are making tons of money off this while I was struggling with student debt. I have to explain that the national lab where I worked isn't full of rich people [0]. There are a lot of easier ways to make money... And my parents each made more than any of the scientists I knew...
[0] People I know who have jumped ship and moved from lab to industry have 2x-3x'd their salary (these are people with PhDs, btw).
https://www.levels.fyi/companies/oak-ridge-national-laborato...
https://www.levels.fyi/companies/lawrence-livermore-national...
[Side note]: I wish we were able to be more honest in papers too. But I have lots of issues with the review system and the biggest is probably that no one wants to actually make any meaningful changes despite constant failure in the process and widespread frustration.
Of course, we haven't even touched on the replication crisis, of which thankfully my dad is blissfully unaware.
A lesson I continually fail to learn is that it isn't about the actual things. Information is a weapon to many people. Not a thing to chase, to uncover, to discover, but a thing that is concrete and certain. I still fear the man that "knows", since all I can be certain of is that he knows nothing.
The problem is that without science and STEM foundations it is very hard for people to even understand what is and isn't known.
My mom used to send me all kinds of articles about chakras: "there are kids born now who have their sixth chakra open" and "this chakra is orange, and that chakra is indigo". One day my mom, my then girlfriend, and I were chatting about what specialisation my girlfriend was thinking about pursuing at medical school. She told her that she was thinking about specialising in endocrinology. My mom became really angry and chided us for using such "big words" to "lord our education over her". So to placate her we explained that it is a doctor who studies hormones, measures hormone levels, and treats diseases of the hormone system. She got visibly surprised, and the only thing she asked was "you can measure hormones?"
The conversation continued of course, but that question, and the genuine surprise on her face, stayed with me. The thing is, she truly did not know that we can measure hormones. And if you don't know that hormones are as real as the legs of the table, and chakras are as real as Santa Claus, then they both sound like equally plausible theories about health. And when you race the stories against each other, "I'm not feeling well because my heart chakra is blocked, and I need healing crystals and massages to get well again" vs "I'm not feeling well because my thyroid gland is not producing enough thyroxine, and I need to take supplements in pill form", the first one wins because it is simpler and neater sounding. But one is kinda bullshit and the other is a real thing. And you won't know that unless you understand that we can measure hormones, while nobody even has any idea what it would mean to measure a chakra.
Compare to the invention of the perceptron, which took a joint effort between a polymathic neurophysiologist and a logician.
And this all ignores that the authors are PhD scientists. So I'm confused how this is categorized as "medical field" in the first place. I found that the ability to memorize is essentially useless in PhD level biological science (I studied immunology, so I can't necessarily speak to other fields), and it is all systems level conceptualizing.
I think this is a team with many talented people who came together to do their best. But I'm sure I'm naive. There seems to have been a lot of new interest and debate about what is happening in the glymphatics sphere.
Others can be guilty of similar sins, of course, and since the early 20th century, when philosophy and the classical liberal arts in general evaporated from school curricula, scientists have generally been quite poor at this, unwittingly treading into subject matters they are ill-prepared to discuss. Compare how a Schroedinger or a Heisenberg [2] talks about philosophical questions, and then look at someone like Krauss [3]. The former may not have been great philosophical thinkers, but there is a huge difference in basic philosophical education and awareness, and these are not just isolated cases.
[0] https://edwardfeser.blogspot.com/2011/01/against-neurobabble...
To really answer your question, I think I need to talk about the books modern-day neuroscientists are writing, and I have to say I simply agree: I think these self-help-style books are not good! Too bad they are so easily propagated in the media.
That being said, I think the rise of "evidence-based" medicine is also causing issues. It gets used as a cop-out to avoid thinking about the mechanics of what is actually happening in an injury. While this is certainly a good thing for treatments where A-or-B superiority is uncertain, there are a lot of cases where I think an RCT just doesn't really make sense.
A pet example:
I broke my ankle recently, and then dug into the literature and common practice. A significant number of people will get end-stage arthritis a few years after "simple" ankle fractures, and often the doctors have no idea why. At the same time, an important part of ankle anatomy is often left unfixed (the deltoid ligament) because a few studies back in the 80s found it wasn't necessary to fix it. The bone that serves an equivalent purpose IS fixed (if broken), though. Mechanically, both restrict the ankle joint and prevent it from moving in certain directions.
When presented with the biomechanical reasons for fixing it, alongside the common poor outcomes for some patients, the response I've seen from surgeons is "it's not supported by evidence", presumably because there isn't an RCT demonstrating definitive superiority.
So much of medicine and treatment is literally just hearsay and whatever your surgeon happened to read last week. As a whole the standard is rising, but so much research is so disjoint, disorganised, and inconsistent that doctors often have no definitive guidance. It's probably more of a problem in some fields (like ortho) than others, but it's still surprising when you see it yourself.
Sounds similar to the problem with tech coding interviews. I've refactored the backend orchestration software of a SaaS company's primary app, saved 24 TB of RAM, and got 300% faster spinup times for the key part of the customer app, but I bomb interviews because I panic, mix up the O(n) of algorithms, and forget to add obvious recursion base cases. I know I can practice that stuff and pass; it's just frustrating to see folks that have zero concept of distributed systems getting hired because they succeed at this hazing ritual.
But with that said, I suppose no industry or job will ever be free from "no true Scotsman" gate-keeping by tenured professionals. Hiring someone that potentially knows more than you puts your own job security at risk.
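For non-programmers wondering what an "obvious base case" even is, a minimal illustration (a generic textbook example, not anyone's actual interview question):

```python
# The classic interview slip: a recursive function is only correct if it
# has a base case that stops the recursion. Drop the `if` under pressure
# and factorial(5) recurses until Python raises RecursionError.
def factorial(n: int) -> int:
    if n <= 1:        # base case: nothing left to multiply
        return 1
    return n * factorial(n - 1)

print(factorial(5))   # 120
```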
In other fields, it is expected that if you can "talk the talk" you can "walk the walk", mostly because it is really hard to talk the right way without actual experience. Tbh, I think this is true of expertise in any domain. I don't think it is too hard to talk to a programmer about how they'd solve a problem and see the differences between a novice and a veteran.
A traditional engineering interview will have a phone screen and an in-person interview. In both, they'll ask you about a problem similar to one they are working on or recently solved. They'll also typically ask you to explain a recent project of yours. The point is to see how you think and how you overcome challenges, not what you memorize. Memorization comes with repetition, so it's less important. I remember in one phone interview I was asked about something, gave a high-level answer, and asked if it was okay for me to grab one of the books I had sitting next to me, because I had earmarked that equation suspecting it would be asked. I was commended for doing so, grabbed my book, and once I reminded myself of the equation (all <<1m?) gave a much more detailed response.
In a PhD-level interview, you're probably going to do this and also give a talk on your work, where people ask questions about it.
IMO the tech interviews are wasteful. They aren't great at achieving their goals and are quite time consuming. General proficiency can be determined in other ways, especially with how prolific GitHub is these days. It's been explained to me that the reason for all this is the cost of bad hires. But all this is expensive too, since you are paying for the time of your high-cost engineers throughout the process. If the concern is that firing is so difficult, then I don't think it'd be hard to set a policy where new employees are hired under a "probationary" or "trial" status. It shouldn't take months to hire someone...
I've been asked about my former projects, my roles, what I liked or didn't like about them, how do I approach a new project, what did I find most interesting, etc.
I gather there are a lot of fakers in the software dev world. So maybe that's why more places try to make you prove you can actually write code.
Reaching for a book to answer makes sense to me. That's what you'd do on the job, and nobody would think less of you for it.
What part is too time consuming? What you describe in the engineering interview sounds like a software engineer interview process as well.
The stereotypical software engineering interview is heavily leetcode dependent. It's why leetcode exists and why they can charge $150/yr for people to just study it (time that could be spent learning other things). I mean, somewhere like Google you can have 3-6 rounds in the interviewing process.
[0] Maybe you'll use a board or paper to draw illustrations and help in your explanations, but you're not going to work out problems. No one is going to give you a physics textbook problem and say "Go".
The classic meme is that MDs love organic chemistry, but they hate biochemistry [1], because one is about memorization and the other is...less so, anyway.
But then again, neuroscientists do tend to love their big books of disjointed facts, so maybe it's more like medicine than I realize. I remember the one class I took on neuroscience was incredibly frustrating because of the wild extrapolations they were making from limited, low-quality data [2], that made it almost impossible to form a coherent theory of anything.
[1] ...except for the Krebs cycle! Gotta memorize that thing or we'll never be able to fix broken legs!
[2] "ooh, the fMRI on two people turned slightly pink! significant result!"
It's not impossible for people who are good at memorization to also be good at understanding systems.
Those people, in turn, are the ones doing this research.
Although it's common, it's not the case that only people with a pure medical background do neuroscience.
All in all, having met quite a few people in the field, the things you're hinting at never occurred to me as an actual problem. My guess is that the people who actually have issues get weeded out very early, like before even finishing their PhD. It's not an easy field.
and it might not be "good at memorization" that's being selected, it might be "conscientiousness", one of the Big Five, and a relatively important parameter.
> Also note that the medical field selects hard for people who can memorize information, to the exclusion of people who can understand systems.
It isn't limited to the medical field; this is quite common in most fields. I understand that testing knowledge and intelligence is an intractable problem, but my main wish is that this would simply be acknowledged: that things like tests are _guidelines_ rather than _answers_. I believe that if we don't acknowledge the fuzziness of our measurements, we become overconfident in them and simply perpetuate Goodhart's Law. There's an irony in that to be more accurate, you need to embrace the noise of the system. Noise here comes either from limitations in measurement (all measurements are proxies, never perfectly aligned with the target; this is "measurement uncertainty") or from the stochastic nature of what you're testing. Rejecting the noise only makes you less accurate, not more.
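To make the "embrace the noise" point concrete, here's a minimal simulation of the noisy-proxy problem (all numbers invented for illustration): select people on a noisy test, and the selected group's test scores systematically overstate their true ability.

```python
# Minimal Goodhart's-Law-style simulation: the test is a noisy proxy for
# skill, so selecting hard on the test overstates the skill you selected.
import numpy as np

rng = np.random.default_rng(42)
true_skill = rng.normal(100, 15, 100_000)              # what we care about
test_score = true_skill + rng.normal(0, 15, 100_000)   # the proxy we measure

top = test_score >= np.quantile(test_score, 0.99)      # "hire" top 1% by test
print(f"selected mean test score: {test_score[top].mean():.1f}")  # ~156
print(f"selected mean true skill: {true_skill[top].mean():.1f}")  # only ~128
```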
I refer to them as "fuzzy databases" (this is a bit more general than transformers too), because they are good at curve fitting. There's a big problem with benchmarks in that most of the models are not falsifiable in their testing. Since what they trained on is not open, you cannot verify that tasks are "zero-shot" [0]. When you can, they usually don't actually look like it. Another example is the HumanEval dataset [1]. Look at those problems and, before searching, ask yourself if you really think they would not be on GitHub prior to May 2020. Then go search. You'll find identical solutions (with comments!) as well as similar ones (a solution is accepted as long as it works).
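For what it's worth, the kind of contamination check this implies is conceptually simple. A toy sketch below, with hypothetical stand-in strings; real checks, like the 13-gram overlap tests used in some LLM papers, are scaled-up versions of the same idea:

```python
# Toy benchmark-contamination check: flag a benchmark sample if it shares
# any n-gram with the training corpus. Real checks normalize tokens and
# use larger n; strings here are hypothetical stand-ins.
def ngrams(text: str, n: int) -> set:
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

training_corpus = ["def add(a, b): return a + b  # simple adder found on GitHub"]
benchmark = ["def add(a, b): return a + b", "def novel_task(x): return x ** x - 1"]

corpus_grams = set().union(*(ngrams(doc, 4) for doc in training_corpus))
for sample in benchmark:
    leaked = bool(ngrams(sample, 4) & corpus_grams)
    print("possible leak:" if leaked else "clean:        ", sample)
```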
IME there's a strong correlation between performance and the number of training samples. You'll also see strong overfitting to very common things.
That said, I wouldn't say LLMs can't perform novel synthesis, just that it is highly limited, needing to stay quite similar to the data they were trained on; but they __can__ extrapolate and generate things not in the dataset. After all, they are modeling a continuous function. But they are trained to reflect the dataset and then trained to output according to human preference (which obfuscates evaluation).
Additionally, I wouldn't call LLMs useless, nor unimpressive. Even if they're "just" a fuzzy database with a built-in human-language interface, that is still some sci-fi shit right there. I find that wildly impressive despite not believing it is a path to AGI. But it is easy to undervalue something when it is highly overvalued or misrepresented by others. Let's not forget how incredible a feat of engineering this accomplishment is, even if we don't consider it intelligent.
(I am an ML researcher and have developed novel transformer variants)
[0] A zero-shot task is one that it was not trained on AND is "out of distribution." The original introduction used an example of classification where the algorithm was trained to do classification of animals and then they looked to see if it could _cluster_ images of animals that were of distinct classes to those in the training set (e.g. train on cats and dogs. Will it recognize that bears and rabbits are different?). Certainly it can't classify them, as there was no label (but classification is discrimination). Current zero-shot tasks include things like training on LAION and then testing on ImageNet. The problem here is that LAION is text + images and that the class of images are a superset (or has significant overlap) with the classes of images in ImageNet (label + image). So the task might be a bit different, but it should not be surprising that a model trained on "Trying for Tench" paired with an image of a man holding a Tench (fish) works when you try to get it to classify a tench (first label in ImageNet). Same goes for "Goldfish Yellow Comet Goldfish For The Pond Pinterest Goldfish Fish And Comet Goldfish" and "Goldfish" (second label in ImageNet).
(view subset of LAION dataset. Default search for tench) https://huggingface.co/datasets/drhead/laion_hd_21M_deduped/...
(View ImageNet-1k images) https://huggingface.co/datasets/evanarlian/imagenet_1k_resiz...
(ImageNet-1k labels) https://gist.github.com/marodev/7b3ac5f63b0fc5ace84fa723e72e...
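The original setup described in [0] is easy to sketch. Below is a toy version, with synthetic vectors standing in for embeddings from a model trained on other classes; every number here is made up for illustration:

```python
# Toy version of the original zero-shot test: given features from a model
# trained on cats/dogs, do *held-out* classes (bears, rabbits) still form
# separable clusters with no labels? Synthetic embeddings stand in here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
centers = {"bear": rng.normal(0, 1, 64), "rabbit": rng.normal(0, 1, 64)}

X, y = [], []
for label, center in enumerate(centers.values()):
    X.append(center + 0.3 * rng.normal(size=(100, 64)))  # 100 samples each
    y += [label] * 100
X = np.vstack(X)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("adjusted Rand index:", adjusted_rand_score(y, pred))  # ~1.0 = clusterable
```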
But in my experience neuroscientists have to have a solid level of systems thinking to succeed in the field. There are too many factors, related disciplines (from physics to sociology), and levels of analysis to be closed off.
Honestly 'our knowledge of [X] is largely mechanistic and without a sense of the larger picture' is weirdly applicable to most scientific fields once they escaped the 'natural philosophy' designation.
This sounds like one of those complete bullshit memes that certain groups of people like to repeat. Very similar to tech people being "creatives" while other groups like sales are somehow not. Utter bullshit.
> Compare to the invention of the perceptron, which took a joint effort between a polymathic neurophysiologist and a logician.
While cross-field collaboration often yields the best insights, I hope you're not implying that computer scientists are somehow better at "understanding systems" than biologists. Not only are computer scientists hugely guilty of pretending that various neural networks are anything at all like the brain (they are not), it's also the case that biological systems are fantastically more complicated than any computing system.
I think there's a sense in which that's true (I've especially heard it with respect to the foundations of maths), but I worry about that way of thinking. There absolutely are places where we have consensus, even on subjects of extreme complexity. And the fact that we really do have consensus can be one of the things that's most important to understand. I don't want people doubting our knowledge that, say, too much sugar is bad, that sunscreen is good, that vaccines are real and so on.
A lot of what passes for nuanced decoding of the social and institutional contexts where science really happens, looks to outsiders like "yeah, so everything's fake!"
And when the job of communicating these nuances falls into the hands of people who don't think it's important to draw that distinction, I think that contributes to an erroneous loss of faith in institutional knowledge.
There's a difference between "cigarettes cause cancer" and "phones cause cancer". The former is very definitely true, confirmed by many studies, and the health impact is very significant. The latter is probably untrue (there are studies that go both ways, but the vast majority say "no cancer"). Even if there's any impact, it's extremely minimal when compared to cigarettes.
People can't distinguish between those two levels of "causes cancer" in a headline.
Neuroscience is in the same quadrant of the knowledge / hype plot as nutrition science.
The fact that you had to add the parenthetical here to hedge your bet demonstrates that you don't even entirely believe your own claims.
[0]: https://www.wiley.com/en-us/Philosophical+Foundations+of+Neu...
[1]: https://cup.columbia.edu/book/neuroscience-and-philosophy/97...
The difference between Woo of the Gaps and Science of the Gaps is that science is on the advance and woo is on the retreat, it has been this way for centuries, and the pace always seems to be determined exactly by the rate at which science advances rather than any actual opposition from the Woo camp. Nothing is over until it's over, but how much do you actually want to bet on a glorious turnaround? You do you, but for me the answer is "not much."
If you mean this...
This is why I think strict materialism on consciousness is misguided. People like to think "we've cracked everything scientifically, from quantum physics to neuroscience, so even if we don't have a good explanation for consciousness now, we'll get there." Except the reality is that macroscopic neuroscientific findings are incredibly coarse and come with many caveats and uncertainties; the statements are more like "this area of the brain is associated with X" than "this area of the brain causes X." It's not just optimistic; it's qualitatively unjustified to think that neuroscience (in its current form, at least) is inevitably capable of cracking consciousness.
Many STEM people hate this because they want to axiomatically believe materialist science can reach everything, despite the evidence to the contrary. shrug
... that wasn't an argument, it was a loosely formed set of vague and unsubstantiated claims. The fact that you immediately deleted it and started insulting anyone who responded to you kind of proves my point. I'm sorry I wasted my time on you; it won't happen again.
Humans being unable to figure out how inanimate matter gives rise to consciousness is not evidence that "strict materialism on consciousness is misguided". Or is there some other evidence I'm unaware of?
> Many STEM people hate this because they want to axiomatically believe materialist science can reach everything, despite the evidence to the contrary.
Do we have actual evidence that it can't reach everything? That would be "evidence to the contrary". What you have given is evidence of its inability to reach everything so far, in its current form. That's still not nothing - the pure materialists are committed to that position because of their philosophical starting point, not because of empirical evidence, and you show that that's the case. But so far as I know, there is no current evidence that they could never reach that goal.
[Edit to reply, since I'm rate limited: No, sauce for the goose is sauce for the gander. The materialists don't get the freebie, and neither do you. In fact, I was agreeing with you in pointing out that the materialists were claiming an undeserved freebie. But you don't get the freebie, for the same reason that they don't.]
The philosophical definitions also sometimes preclude any human from being able to meet the standard, e.g. by requiring the ability to solve the halting problem.
Without knowing which thing you mean, we can't confidently say which arrangements of matter are or are not conscious; but we can still be at least moderately confident (for most definitions) that it's something material because various material things can change our consciousness. LSD, for example.
I feel really encouraged here, because I think this example has surfaced recently (to my awareness at least) of a good example of material impacts on conscious states that seems to get through to everybody.
I think the one about drugs is helpful because it speaks to the special things the mind does, the kind of romanticized essentialism that's sometimes attributed to consciousness, in virtue of which it supposedly is beyond the reach of any physicalist accounting or explanation.
Is software electrical? It certainly runs on electrical hardware. And yet, it seems absurdly reductionist to say that software is electrical. It's missing all the ways in which software is not like hardware.
Is consciousness similar? It runs on physical (chemical) hardware. But is it itself physical or chemical? Or is that too reductionist a view?
(Note that there is no claim that software is "woo" or "spirit" or anything like that. It's not just hardware, though.)
> Please don't fulminate. Please don't sneer, including at the rest of the community.
Sure, and maybe Cthulhu is about to awaken in the sunken city of R'lyeh. You can't prove me wrong either.
https://www.thetransmitter.org/glymphatic-system/new-method-...
> The new paper used many of the techniques incorrectly, says Nedergaard, who says she plans to elaborate on her critiques in her submission to Nature Neuroscience. Injecting straight into the brain, for example, requires more control animals than Franks and his colleagues used, to check for glial scarring and to verify that the amount of dye being injected actually reaches the tissue, she says. The cannula should have been clamped for 30 minutes after fluid injection to ensure there was no backflow, she adds, and the animals in the sleep groups are a model of sleep recovery following five hours of sleep deprivation, not natural sleep—a difference she calls “misleading.”
> “They are unaware of so many basic flaws in the experimental setup that they have,” she says.
> More broadly, measurements taken within the brain cannot demonstrate brain clearance, Nedergaard says. “The idea is, if you have a garbage can and you move it from your kitchen to your garage, you don’t get clean.”
> There are no glymphatic pathways, Nedergaard says, that carry fluid from the injection site deep in the brain to the frontal cortex where the optical measurements occurred. White-matter tracts likely separate the two regions, she adds. “Why would waste go that way?”
> That’s a pretty big ambiguity in the story!
No, it's not: "waste clearance faster during waking than sleep" does not mean it's adequate to the job, and waste clearance at night could still be critically important. We also do not know comprehensively what the waste consists of, and having a specific sleep system implies it's doing something.
It's a trade-off. The brain is about as large as it can be while making birth possible. It already uses a lot of energy (2% of body weight, 20% of energy consumption). We also need it to be working at peak performance when we are doing activities.
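A quick back-of-envelope on those two figures (assuming a ~2000 kcal/day resting burn, a ~1.4 kg brain, and a ~70 kg adult):

```python
# Back-of-envelope for "2% of body weight, 20% of energy": the brain runs
# on roughly the power of a dim light bulb, but at ~10x the body's W/kg.
KCAL_PER_DAY = 2000                     # assumed resting metabolic rate
J_PER_KCAL, S_PER_DAY = 4184, 86_400

body_watts = KCAL_PER_DAY * J_PER_KCAL / S_PER_DAY
brain_watts = 0.20 * body_watts
print(f"whole body: ~{body_watts:.0f} W, brain: ~{brain_watts:.0f} W")
print(f"brain: ~{brain_watts / 1.4:.1f} W/kg vs body: ~{body_watts / 70:.1f} W/kg")
```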
A background 'scrub' task to keep it working 24/7 would probably use more energy (require more food and heat dissipation 24/7), possibly require a larger area (for redundancy, similar to how dolphins can sleep one hemisphere at a time and have really large brains). An alternative would be to slow down processes enough so that those tasks could happen constantly.
And then our day/light cycles helped select for this approach. Until recently there wasn't much one could do (safely!) at night.
I wonder whether, if it had been beneficial to have larger brains, we'd have evolved to support that. Diminishing returns maybe, or just a local maximum we didn't get out of?
Interestingly, there seem to be some indications showing that human interventions by modern technology already show clear evolutionary trends: https://pmc.ncbi.nlm.nih.gov/articles/PMC5338417/
Humans might eventually evolve to the point of not even being able to be born naturally anymore.
The continued existence of our species would become dependent upon continued civilisation. A dark age could kill us, or at least cripple the population.
*how true is this? Uni-educated people tend to have lower fertility rates.
So if bigger brains meant people reproducing more, our brains would get bigger to the point that most births are cesarean or something.
I do wonder what happens when we eventually evolve to a point where we can't survive without more and more advanced technology.
A lot of people who would have died off before reproducing 200 years ago now don't, which is of course incredible for us. But what are effects of that 100/1000 years down the line?
Presumably we'll have plenty of more immediately pressing issues over that time frame.
Why? If you can gather fruits or hunt prey while all your competitors (or predators!) are asleep, isn't it an advantage? What about nocturnality? https://en.wikipedia.org/wiki/Nocturnality
At night it is harder to see food. It is harder to see predators, some of whom are in fact nocturnal. It is harder to notice visual cues and gestures from allies/kin. It is harder to navigate, both due to difficulty seeing distant landmarks and nearby obstructions, so you are more likely to get lost and/or injured. It is colder so your body has to spend more calories to keep you warm.
There are adaptations that can improve nocturnal capabilities, but these typically come with tradeoffs that make diurnal life harder. Evolution is a series of many baby steps - either you need to adapt to not sleeping while you're still at a disadvantage at night, or you need to adapt to being awake at night while you still need to sleep. Neither path seems like it would have been advantageous to our ancestors.
I remember reading that, in theory, you could sleep for 15 minutes 6 times in 24 hours.
Or perhaps to keep us quiet and immobile, and harder to locate and eat ?
True, brains, after 16 hrs of actual work, need to hallucinate strongly for 8 hours or so in order to continue their high-level contributions to society.
Taking this as a jumping off point for a way of thinking about those 'services'. It seems remarkable to me that we can initiate the attempt to think of an elephant, and then get there in one shot. We don't sort through, say, rhinos, hippos, cars, trucks. We don't seem to have to rummage.
Of course when it comes to things on the edge of our memory or the edge of our understanding, there's a lot of rummaging. But it could have been the case that everything was that way (perhaps it is that way for some animals), instead, there are some things to which we have nearly automatic, seemingly instant recall.
There's hope. If the carbon chauvinists can be prevented from messing things up, AI is on track to provide something with a better SLA, which will finally allow us to decommission and junk those troublesome legacy systems without disrupting the business.
At all times, every single one of the billions of participants acts like a bureaucrat, delaying response until it's unavoidable and then resting afterwards at least half the time. If only we could cut through the bureaucracy!
Neuronal activities:
- Action potential initiation: 0.2-0.5ms
- Action potential duration: ~1-2ms
- Relative refractory period: ~2-4ms
- Total cycle time until fully ready: ~5-7ms
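Taking those numbers at face value, the "resting at least half the time" quip roughly checks out; a back-of-envelope sketch, ignoring synaptic delays and everything else:

```python
# Rough arithmetic on the cycle times above: the refractory machinery caps
# a single neuron's sustained firing rate, and most of each cycle is rest.
for cycle_ms in (5, 7):                    # the ~5-7 ms "fully ready" range
    max_rate_hz = 1000 / cycle_ms
    active_fraction = 2 / cycle_ms         # ~2 ms of actual spiking per cycle
    print(f"{cycle_ms} ms cycle: max ~{max_rate_hz:.0f} Hz, "
          f"active ~{active_fraction:.0%} of the time")
```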
But at least there’s (usually) some exciting shows on while you are waiting!
Really poor design.
I picked up a simple smart watch that tracks sleep (one of the Garmins, as they are one of the few that protect privacy and don't need to connect to the Internet). I slowly and methodically improved my sleep, and I feel like a different person.
I have noticed that if I turn my blue light filter on my screens off, that has a huge impact. Working long days has a huge impact. I take a hell of a lot of magnesium. I need ~20 minutes of outdoor walking a day and I need to eat dinner before 4pm. Lots of other small things that have an impact that I'm probably forgetting.
How many of us are just chronically tired?
That is a tough question. Activity, it seems, has a habit of begetting activity. Such that the answer may not be "you need better sleeping habits," but more "you need better activity habits."
Noticing things is also a dangerous place to be in. A lot of what your body does while asleep is based on expectations as much as it is anything else. Learned expectations, to be specific. Most people know the "you wake up before the alarm goes off" idea. That is strong enough that it will work for changes in the alarm time.
What does that mean? It may be that your body learned a cue to start something for your sleep. So, for you, you now need to turn on your blue light filter; even if that may, in fact, not be actively doing anything.
Probably a lot of us, especially parents of small children.
I've also been struggling with sleep for the past five or six years, waking up in the middle of the night feeling strangely wired up. With a lot of trial and error I've been improving the quality of my sleep.
Three years ago I went to a sleep clinic because I noticed symptoms of sleep apnea and they were able to confirm it and prescribe a CPAP machine, for which I am grateful, but the overall experience was disappointing. When I explained during the follow up that I still was waking up at night feeling stressed they brushed me off and suggested some herbal remedy. It turns out that the pressure they had prescribed me was laughably off, which I only learned through trial and error for a period of two years until I found what works for me -- almost twice what they prescribed.
You mention some factors that I've also noticed having a big impact, like stress/work, walking outdoors (1hr minimum for me), stretching, foam rolling, early dinners, and only drinking one cup of coffee first thing in the morning. Another one that seems to have a weirdly strong impact is what I eat for dinner, with legumes/beans being by far the most beneficial -- maybe something to do with blood glucose during the night?
Doctors will often recommend exercise, but I find that these days even moderately strenuous exercise like riding a bicycle destroys my sleep quality for several days. There's something about it that appears to be too physiologically stressful, even though ten years ago I was a happy regular gymgoer.
this is surprising! Not that this would be easy to just do, but have you ever leaned into it for a while (like a month) and seen if that persists? I'm obviously not a doctor or anything -- I just wondered in reading that whether it may possibly be a change shock that would subside after a brief period at a higher activity level, resulting in the best of both worlds.
I had something similar to this, and I think I was able to fix it. The theory is that your sleep is still poor, even though you sleep through the night. This causes high cortisol levels during the daytime and a higher resting heart rate. This is elevated further after moderate exercise and takes a long time to get back to normal because your sleep isn't adequate. If your heart rate doesn't come down enough, your sleep quality gets destroyed.
The solution, for me and I am guessing for you, is this: stop the cycling. First fix sleep. Track it using a Wellue O2 Ring. If the scores are not good, then reconfigure the CPAP (use the sleep apnea subreddit for input). Once sleep is sorted as per the O2 Ring, it might take a few months for you to recover. After that you can restart moderate exercise and things should be fine.
It did get better when I stopped cycling, as much as I loved it. I'm now walking instead and feeling much better. I intend to increase volume over time and once my VO2Max is back to my baseline then I may introduce cycling with an eye on going easy and eating enough before/during/after exercise.
Thanks for the advice; it is good to hear that it worked for other people.
I have thought that the blue light filter doesn't do so much, with a caveat. The laptop screen is much less bright, so it doesn't bother me. It seems like the blue light of a desk screen has a bigger effect. But I also think the brain activity of stimulus-seeking on the screen itself has a big effect on sleep. It's better to turn off screens entirely to wind down, or do something that actually helps you wind down for sleep.
Everyone's obviously different and your mileage may vary, but at the end of the day you can feel drastically different by heavily modifying your diet and pushing past hunger once per day.
I'm in the midst of a reflux episode so this is definitely something, but 4-5 hours between final meal and bed is a lot of time. Regardless, glad you found something that works and thanks for sharing.
Alertness is also partly a function of resting metabolic rate, which is higher for those who exercise and/or have more muscle tissue.
Just to confirm, because this is a surprising result: disabling the blue light filter on your screens improves your sleep?
Thanks to the tracker, I was able to determine that on nights I have just awful sleep, it correlated with my exercise days. Turns out that taking a pre-workout loaded with caffeine at 4PM is a terrible idea because caffeine can have a nearly 12 hour half-life. Oops. Ditched the pre-workout and my sleep improved significantly. No more insomnia and less time waking up in the middle of the night. I still have issues with REM at least half of the week unfortunately.
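The half-life arithmetic is what makes a 4 PM dose so costly; a quick decay sketch (the dose and the 12 h half-life are assumptions taken from the comment above, not measurements; typical quoted half-lives are closer to 5-6 h):

```python
# Exponential decay of a hypothetical 4 PM pre-workout caffeine dose.
dose_mg, half_life_h = 200, 12   # slow-metabolizer end of the range

for hours, label in [(6, "10 PM"), (8, "midnight"), (16, "8 AM")]:
    remaining = dose_mg * 0.5 ** (hours / half_life_h)
    print(f"{label}: ~{remaining:.0f} mg still circulating")
```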
I finally got referred to a sleep study a few months ago and although it was for existing sleep apnea - which I've already been treating successfully with cpap/autopap for decades - it confirmed that I am indeed not going into REM. So it's not dementia (at least not yet), it's lack of REM sleep. It also revealed that my body is moving a lot during sleep, not good. And for the cherry on top, I recently started exhibiting behavior of REM sleep disorder where I am smacking/punching myself and my partner and yelling out in the middle of the night. Definitely not a good sign, but at least now we know sleep issues are at the heart of it. That Fitbit sleep tracking turned out to be very valuable after all.
N3, also known as deep sleep, is when the glymphatic system flushes toxins from the brain; it is also when the brain consolidates memories, increases HGH secretion (along with other hormonal changes), primes the immune system, drives the parasympathetic response, etc.
REM sleep is related to emotional processing, and some memory, but I also recently heard a theory that REM may be necessary to prevent the plasticity of the brain from over-writing the visual system with other inputs, which was an interesting theory, as sight is the only sense which is turned off during sleep.
How do you do that? The smartwatch may give some info, but what do you do with it that allows falling and staying asleep, while all kinds of random variables may affect the metrics?
I was routinely waking up in the middle of the night and unable to fall back asleep even on days I did not drink. Now I fall back asleep instantly.
I tried eliminating caffeine and practicing mindfulness before cutting out alcohol. I only stopped to be healthier, was pleasantly surprised when all my sleep issues went away. Have resumed my caffeine intake without any problems.
The glymphatic system, like any great scientific theory, unites disparate findings under a common mechanism. Not getting enough sleep is akin to not running your dishwasher or washing machine long enough: the gunk accumulates.
And for all the parents out there, pediatric recommendation is 10-12 hours a night for kids 6-12 years old and 8-10 hours a night for kids 13-18 years old.
I had gotten prescribed some Zopiclone which is similar to Zolpidem as found in Ambien. Zopiclone makes me feel like I have a brain injury the day after. Sometimes after the first night, always after the second night if I find I need to take it two nights in a row. It’s frightening.
I came across a paper: “Pharmacokinetic and Pharmacodynamic Interactions Between Zolpidem and Caffeine”
https://www.researchgate.net/profile/Roberta-Cysneiros/publi...
Based on my understanding of the results that a significant dose of caffeine counteracts “some but not all” of Zolpidem’s effects on cognition—and the two Z-drugs being similar—I tried drinking a tiny little bit of coffee with the tiny little bit of Zopiclone. (I take 2-3mg; a whole tablet is 7.5mg.)
The result is that I am able to sleep and do not feel brain-damaged the day after, and the effect also seems to be that the failure rhythm of stress-related waking up at precisely 5:30 is broken. In other words, the combination seems to fix the problem.
I suspect that part of the reason might be that the caffeine counteracts the disruption of the norepinephrine oscillation you mention. (Thanks!!)
That just sounds like you think every sleeping pill is scary, and that's true for literally all of them.
Sleeping pills are mostly effective together with other types of therapy to address the underlying causes, just like most "temporary solutions". They're supposed to be used as "We'll try to figure out what's wrong, but in the meantime, so you can feel relatively human, here is a temporary crutch", not as a long-term solution.
No direct link has been found for this, but eating carbs has always given me deeply vivid (and often exciting) dreams, ever since I was little. Unfortunately, I wake up from these exhausted, which isn't great for the day.
I’ll continue being careful, and especially stay mindful when life stress—like love or money—picks up. It’s good to be aware if anything is being masked or overlooked in the process.
Carbohydrate metabolism has histamine intimately involved in it; histamine, as per its inflammatory role, is basically used by the body to open tissue to receive blood glucose.
As it happens, histamine is also a neurotransmitter! An excitatory alertness neurotransmitter!
Both of these aspects have been on the scientific record for some significant time, but are only really becoming widely known recently.
I have ADHD. I take lisdexamfetamine. Upon starting medication at 39.5 years of age, I quickly noticed that I had to be really careful with coffee, and especially to not at all touch any sweet foods or desserts around evening or so. Or I would wake up at 5:30 AM. (Exactly and precisely 5:30. Reliably. It’s sort of fascinating.)
As it turns out, amphetamine releases histamine! And! Caffeine inhibits the enzymatic breakdown of histamine! And sugar causes histamine to be released.
Meanwhile, older drugs that are less distressing aren't used any more, because "we don't use that any more", as the doctor puts it if I ask about Librium.
I don't think anyone here was making that assertion. As far as there is a broad, common experience, it is doctors who won't consider older meds, even if those come with less baggage than their newer counterparts.
Took me about 2 years after the military to get back to "normal"
I do miss the Provigil, though... that stuff made me able to focus so well.
I had an addiction but didn't abuse it. It got to the point that I craved Ambien during the day for reasons I can't even explain; I just inexplicably wanted to take it. I wasn't even taking full pills of the usual dose, I usually cut them in half.
It took me a long time to learn to put my phone away before taking it. I would text people I was casually dating overly romantic and loving things and have zero memory of it. Thankfully, whenever it happened, the people involved always just thought it was funny, and I did have the awareness to preface those texts with "maybe its just bc i took some ambien". After a few dates with someone I would warn them that I take Ambien and might text them something stupid but loving, so they were well prepared.
https://hn.algolia.com/?query=magnesium%20sleep&type=comment
For the past 5 years we've been developing phase-targeted auditory stimulation to increase slow-wave activity, which has been shown to have a positive effect on amyloid response, as well as on memory and a bunch of other biomarkers.
https://pubmed.ncbi.nlm.nih.gov/38163288/
I link to more research on our website for anyone interested in the space - https://affectablesleep.com/research
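For readers wondering what "phase-targeted" means mechanically, here is my rough reading of the general idea, as an offline sketch on synthetic data (not Affectable's actual implementation; real systems have to predict the phase in real time):

```python
# Sketch of phase-targeted stimulation in general: band-pass the EEG to
# the slow-wave band, estimate instantaneous phase, and trigger a sound
# near the target phase. Synthetic signal; offline Hilbert phase only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                          # hypothetical sample rate, Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 0.8 * t) + 0.5 * rng.normal(size=t.size)

b, a = butter(2, [0.5, 4], btype="bandpass", fs=fs)   # slow-wave band
phase = np.angle(hilbert(filtfilt(b, a, eeg)))

target = 0.0                                      # assumed up-state phase
triggers = np.flatnonzero((phase[:-1] < target) & (phase[1:] >= target))
print(f"{triggers.size} stimulation triggers in 30 s of synthetic EEG")
```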
And to further qualify the conclusion, the research was done in mice so it's premature to say whether or not human brains operate identically. (Mammalian anatomy between species is often similar, but just as often is found to be different in unexpected ways.)
- if you rhythmically give mice norepinephrine while they're awake, can you create the same movement in cerebrospinal fluid? Would mice go to sleep later following such an intervention?
- could you directly just pump cerebrospinal fluid faster? If you were willing to have a mechanical device surgically installed, could you have a rapid, extra-refreshing sleep at the press of a button?
- if the efficacy of washing is partly due to the contents of cerebrospinal fluid, could you look at what's being "washed out" and add stuff to the cerebrospinal fluid that makes those things more soluble?
In some far hypothetical future human device, I think even if amplifying a "washing" function doesn't replace sleep it could still be helpful ... but outweighing the risks involved in the intervention (attaching a person to a pump?) would be a high bar. But if decades from now you were already going to put in a neuralink v20, perhaps it would seem reasonable.
That's ... controversial, a few years ago fraud allegations surfaced [1].
[1] https://www.alzheimers.org.uk/for-researchers/explaining-amy...
> Studies from Nedergaard’s group and others suggest vigorous glymphatic clearance is beneficial: Circulation falters in Alzheimer’s disease and other neurodegenerative illnesses.
That circulation falters in Alzheimer's does not, by itself, suggest anything about the benefit of circulation. Science's science journalism is usually SOTA; this is not.
- https://gpsych.bmj.com/content/37/3/e101641
Certainly seems worth investigating but I wouldn't call it a crazy success yet.
I detest this kind of medical research. It's horrific barbarity.
If the output is important enough for this kind of activity to take place, then it's important enough for humans to volunteer to be the subjects. If nobody volunteers then it isn't that important after all. Leave other species out of it.