There are many audiences who want you to go deep but lack the necessary level of understanding. In fact, these audiences are the ones who become what the author calls imitators: pretending to understand when they do not.
Experts are experts not because they’re teachers; they’re experts because they’re experienced and are executionally excellent.
While certainly there are those who struggle to connect with layman audiences, especially in academia, a complete inability to communicate the challenges and successes of a technology, technique, or theory to laymen is a huge demerit against a claim of expertise.
There's a huge difference between walking through technical and theoretical mechanical specifics, and being able to communicate about a subject at different levels of abstraction. The greatest experts can often walk with a layman right down to, and rub against, the jargon-filled specifics in a way that leaves no doubt they're able to step over that line without problem.
This is important because we don't perform ideation, architecture, or structural problem solving in the terms of white papers and technical specifications. If an expert does not have an abstract internal narrative with which to navigate the mechanics of the subject, they are likely not an expert at all.
If you're having a conversation with someone who asks more than surface-level questions but hasn't more than surface-level knowledge, the answers can require many years of study -- or, at least, hours of conversation to get anywhere. This situation is incredibly frustrating if you're an expert, often because these question-askers aren't at all willing to listen to hours of you explaining many things to them.
Consider a person who says, "I heard Rust was better than Python because it's safe" -- where would one even begin? Suppose now you're dealing with a person who knows enough to make the claim, but isn't patient enough to have memory safety, garbage collection, interpreters, etc. explained.
This is often the situation experts are in. Indeed, it's a majority of hackernews threads.
"Rust is better for safety than C because it makes it harder or impossible to make certain kinds of mistakes that are easy to make in C."
"What kind of mistakes?"
"There are times when you need to acquire some shared resource. Let's say memory. In Rust, it's easy to ensure you return that memory when you're done with it. In C, it's easy to forget to return that memory when you're done with it. This is called a memory leak, and you might find that eventually there's not enough memory for other programs to run."
"What's memory?"
"Think of a program as a recipe you're performing. Memory then is like working space. Like your counter space in your kitchen. If you run out of counter space, you don't have room to chop up more vegetables or temporarily store your cooked meat or what-have-you. Maybe you ran out of counter space because you left last night's dirty dishes on it. Rust automatically puts the dirty dishes in the dishwasher. In C, you have to remember to do that yourself, and sometimes you forget to."
And so on.
There's an easy explanation as long as you're not expecting them to perform detailed analysis. And I'm not.
edit: eventually you'll get to your limit of simplicity, things like "what is fire?" and maybe you need to say "I don't know, let's find out together!" We all have limits to our expertise, after all.
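The kitchen analogy maps directly onto code. Here is a minimal sketch (the `Dishes` type and `cook` function are invented for illustration): in Rust, cleanup runs automatically when a value goes out of scope, so "putting the dishes in the dishwasher" is not something you can forget.

```rust
// A value with a destructor: the "dirty dishes".
struct Dishes;

impl Drop for Dishes {
    fn drop(&mut self) {
        // Runs automatically at end of scope: the "dishwasher".
        println!("dishes washed");
    }
}

fn cook() -> &'static str {
    let _dishes = Dishes; // acquire the resource ("use some counter space")
    "meal"
} // `_dishes` is dropped here; no explicit free call, so no leak

fn main() {
    assert_eq!(cook(), "meal");
    println!("done cooking");
}
```

In C the equivalent would be a `malloc` with a `free` you must remember to call on every exit path.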
The person claiming Rust is safer than Python is so wholly confused that this two-minute tap-dance isn't good enough.
When you're dealing with people with a-bit-more-than-surface understandings, the main problem is their head is full of misunderstandings. It is these that can take hours, or years, to undo.
To suppose otherwise is to suppose that all questions can be answered in a cute couple of sentences. This is, indeed, the opposite lesson of expertise.
And it's always about explaining to the target person's level of understanding. A layman isn't going to know the difference between languages or why you'd pick one vs. another. That's a long digression. Explaining why to use Rust to a Python expert is a very different discussion.
An expert can switch between explanations as required for the audience. Non-experts, in my experience, cannot tailor the explanation for the audience. They typically bottom out at a level that leaves them unable to meet the questioner in the middle.
Hell, I've been the non-expert, especially when it comes to something like why a particular material was acting like a diode instead of not passing the signal or passing the signal intact. I can regurgitate that the half-rectified signal from the aggressor is bleeding into the signal of interest, but I can't tell you why. That's when I'll tell you to talk to the EE team for specifics on why, as well as what they plan to do to fix it.
I chose this example as I assumed it would be clear and non-controversial enough to make my point, i.e., I assumed most HN readers would appreciate that a person claiming Rust's memory safety model made it safer than an interpreted language would be a person clearly deeply confused about the meaning of any of these terms.
If they don't know anything specific, then it's pretty safe to say "Well, there are some differences in the languages, and you might choose one over the other for performance or ecosystem reasons, but they're both pretty safe overall. You probably don't need to worry too much about it, TBH."
In general though, GC means automatic memory management which is effectively a kind of dynamic memory safety.
You may not understand their level, and may be unable to negotiate it due to a vocabulary barrier (people invent non-standard vocab all the time). Your oversimplifications may have unintended consequences, etc. You must take that into account and plan how to make things clear.
And when you have that skill without any expertise, they call you a politician. Only a politician can answer "easy" to a hard question and then say something unrelated for a few paragraphs, because he doesn't know anything about the original one.
This isn't actually bad in all cases though. It's like learning about science in primary school. You're gonna get some things simplified in a way that might be considered incorrect by those who are experts, but without some base layer of understanding to bootstrap from, you won't be able to become an expert.
Meanwhile, the expert who got everything technically correct gets in an argument over irrelevant trivialities.
on the other hand, python is more productive than rust, so if you're time-constrained, sometimes using python will get you a higher-quality product, for example because it's better tested or has a less error-prone or more forgiving user interface. both python and rust fail only at run time when they detect index errors, and users can make dangerous errors with any potentially dangerous program
on the gripping hand, rust runs enormously faster than python, and sometimes optimization trades off against testing time and ui iteration time in the same way that writing more code does, and while writing python is faster, reading it often isn't, so there may be some size of program above which python has no advantages
so i think it's better to say that the question is based on an oversimplification rather than an error. to explain this to a layman i would probably tell some stories of programmers working late nights and delivering unusable guis
Every bytecode instruction is implicitly wrapped in a lock.
Of course, you could still get semantic errors by locking the wrong things at the wrong time; but you can do that in Rust as well.
A “data race” is simultaneous access of memory from multiple threads, where at least one access is a write. It’s undefined behaviour in modern compilers (including LLVM therefore Rust).
Rust prevents that (except unsafe), as does Python.
All “semantic” concurrency bugs that can be written in Python can also be written in Rust.
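The data-race half of this can be sketched in safe Rust (the `count_to` helper is invented for illustration). Sharing a plain `&mut usize` across threads simply would not compile; wrapping the counter in `Arc<Mutex<...>>` is the path the compiler pushes you toward, and it makes each read-modify-write atomic:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads increment a shared counter; the Mutex makes each
// read-modify-write atomic, so the final total is deterministic.
fn count_to(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1; // lock held across the whole update
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(count_to(4, 1000), 4000);
    println!("total = {}", count_to(4, 1000));
}
```

Semantic bugs survive, though: lock too coarsely, or drop the lock between a read and the dependent write, and safe Rust is as happy to compile it as Python is to run it.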
another definition of a data race condition is that it's a case where different interleavings of thread accesses to shared data produce different results. see https://en.m.wikipedia.org/wiki/Race_condition for more details and background on the concept
such bugs can be written in rust with unsafe, but not without it
hmm, thinking about it more, i wonder if what i'm saying makes sense
Not to mention, Rust chose not to do anything about the memory fence problem. You're probably mostly fine on x86, but acquire/release semantics and all that nonsense still apply when writing concurrent algorithms in Rust (you'll often get lucky if you have something like a queue/channel and use message passing, since implementations of those often have a few fences internally, probably the fences you need for semantic correctness in your own algorithms). You're a lot more likely to see that sort of problem crop up in embedded work.
[0] That's obviously a bad idea, but more intricate concurrent algorithms are tricky to write in Rust for the same reason they're tricky to write in other languages. You get a bit of help from the compiler which you wouldn't really in Python if you design things to leverage the type system and prove the invariants you need to uphold, but the compiler doesn't know about any of those invariants aside from data races.
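A hedged sketch of the acquire/release point (the `publish_and_read` name is invented): the standard message-passing idiom stores the payload, then raises a flag with `Release`; the reader spins on the flag with `Acquire`, which guarantees the payload write is visible once the flag is seen. Rust gives you the vocabulary, but choosing the right orderings is still on you.

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

// One thread publishes data and then raises a flag with Release;
// the reader spins on the flag with Acquire. The Release/Acquire
// pair guarantees the payload store is visible once the flag is.
fn publish_and_read() -> u32 {
    let data = Arc::new(AtomicU32::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let writer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed);   // the payload
        r.store(true, Ordering::Release); // publish: payload may not sink below this
    });

    while !ready.load(Ordering::Acquire) {
        std::hint::spin_loop(); // wait for the flag
    }
    let seen = data.load(Ordering::Relaxed); // ordered by the Acquire above
    writer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(publish_and_read(), 42);
    println!("ok");
}
```

Replace the `Release`/`Acquire` pair with `Relaxed` on a weakly ordered target and the reader may observe the flag before the payload, which is exactly the class of bug the parent describes.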
Rust's safety guarantees typically depend on:
- Soundness of their semantics at a conceptual level
- Correct implementation of the semantics in rustc (in Rust)
- Correct translation of those semantics to LLVM IR targeting OS syscalls in rustc (in Rust)
- Correct translation of LLVM IR to native code in LLVM (written in C++)
- Correct implementation of OS syscalls (typically in C)
Python's safety guarantees typically depend on:
- Soundness of their semantics at a conceptual level
- Correct implementation of the semantics in CPython (in C)
- Correct translation of C to LLVM IR targeting OS syscalls in Clang (in C++)
- Correct translation of LLVM IR to native code in LLVM (written in C++)
- Correct implementation of OS syscalls (typically in C)
Both are typically contingent on correct implementation of a huge swath of C/C++ and Rust code. Even if the entire stack were written in Rust, it still wouldn't be sufficient to guarantee memory safety, since bugs anywhere in the stack could introduce memory unsafety into compiled Rust code.
Rust's guarantees come entirely at the first two layers of the stack: If the semantics are sound, and the semantics are implemented correctly in rustc, then the generated LLVM IR (treated abstractly) has memory safety. Python has similar guarantees: if its semantics are sound, and the semantics are implemented correctly in CPython compiler/interpreter, then the resulting execution of the interpreter has memory safety.
This is a false explanation though - the problem is that memory is returned before you're done with it.
https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
This is extra-wrong as python has a garbage collector to break reference cycles. If giving a simplified explanation to a layman, it's important that if they repeat the explanation to another expert they don't hear "well that's wrong".
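The leak in question is easy to reproduce in entirely safe Rust (the `Node` type here is a minimal invented example): two `Rc`s that point at each other keep both strong counts above zero forever, so the destructors never run.

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Build a two-node reference cycle and report the strong counts.
// Each node is kept alive by the other, so neither count can reach
// zero when the local handles are dropped: the memory leaks, with
// no `unsafe` anywhere.
fn build_cycle() -> (usize, usize) {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // close the cycle: a -> b -> a
    (Rc::strong_count(&a), Rc::strong_count(&b))
}

fn main() {
    let (ca, cb) = build_cycle();
    assert_eq!((ca, cb), (2, 2));
    println!("strong counts: {} {}", ca, cb);
}
```

CPython's cycle collector would reclaim the equivalent Python structure; in Rust you'd break the cycle by making one direction a `Weak` reference.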
Room full of buckets. And each of those has 8 light switches in it.
And on modern computers, the janitor keeps the buckets he's used most recently next to his desk.
Funny you ask, this was the most frustrating question in my childhood because no one could give a satisfactory answer of what exactly is the visible part of fire or whether it even exists as a body.
https://www.stonybrook.edu/commcms/alda-center/thelink/posts...
Fire is only possible on planets with an atmosphere containing extraordinarily high accumulations of oxygen. The only known process to bring this about, as far as I know, is life.
Generically, lightning is the 'original' fire on earth. But I take it the commenter was thinking more of their school days and the sustained sort of burning commonly called 'fire'.
If you think they're the same, consider why we need explanations at all if a description-by-analogy does the same job.
Like, I never got further than “computers can only execute and understand these few specific commands” before the layman party gave up.
It is absolutely the same with plenty of mathematical topics (like, just the field of abstract algebra is absolutely mind boggling for almost everyone), physics, engineering, etc. Sure, one might bring up Feynman, but physics does have several topics that are easy to grasp - it’s literally about the world around us, and while a layman may not understand anything about the complex calculations about, say, an internal combustion engine, they will find it accessible due to the common ground.
"Because it turns out that people aren't using the language too much yet, and it makes some things that are easy to do in C, hard to do in order to keep the language safe. It's all tradeoffs."
At some point the person asking is satisfied and that tangent ends. This is how conversation works.
You aren't trying to teach a layman how to write production-ready Rust code, and they aren't interested in learning how.
Hard and easy here are extremely deep and complex subject matters. Is goto hard or easy, for example?
The person is going to give up (or you are), because you are not going to be able to just have this conversation for any meaningful length of time. This is a (bad) fantasy, one that books sometimes attempt (and that I hate), and one that some people indulge in.
They are not going to Ask The Next Question because if they could do that, it would come from a position of already understanding in a sort of anachronistic way. (In that they already have your answer, yet still want it.)
They lack intuition for all the new concepts, and they have a bunch of false friends (to borrow from linguistics), in that they think they understand some of the concepts - but they are different concepts with the same or similar names.
If you are a good teacher, then I am sure you will be very successful in explaining, but it's just not happening quickly nor with just anyone.
It is the light of other days.
Feynman explains it here: https://youtu.be/N1pIYI5JQLE?si=APotFnCuOVKN0wc8
Imagine, as a simple model of expertise, three different levels of knowledge in a subject: beginner, intermediate, and advanced. In my experience, an expert can always explain their knowledge to the next level down: if you've got someone at the advanced level in a room of intermediates, then the person at the advanced level should be able to explain what's going on to their intermediate peers. If they can't, I would be deeply sceptical that they know their stuff.
But beyond that - someone at the advanced level in a room full of beginners, for example - communication gets harder, and the expert usually needs to be skilled in teaching in addition to their specialist subject in order to effectively explain what they're on about.
So in your example, the expert is dealing with someone who only knows the basics, and so will need to do a lot of explaining to get them up to speed. But if instead the expert was asked something like "How does the use of ADTs in Rust change how you would model data in comparison to Python?", then the person they're dealing with probably knows enough that the expert only needs to explain the relevant specifics to them.
So in summary: yes, teaching someone who knows relatively little compared to you requires a lot of specific teaching skills, even as an expert. But if you really are an expert, you should be able to explain your expertise to peers who just don't have the specific knowledge you have.
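To make the ADT question concrete, a small sketch (the `Payment` enum is invented for illustration): in Rust you model a closed set of cases as an enum, and `match` forces every variant to be handled, where a Python version would typically rely on duck typing and runtime checks.

```rust
// Modeling a payment method as an ADT: adding a new variant later
// makes every non-exhaustive `match` a compile error, not a runtime bug.
enum Payment {
    Cash,
    Card { last4: String },
    Transfer { iban: String },
}

fn describe(p: &Payment) -> String {
    match p {
        Payment::Cash => "cash".to_string(),
        Payment::Card { last4 } => format!("card ending {}", last4),
        Payment::Transfer { iban } => format!("transfer from {}", iban),
    }
}

fn main() {
    let p = Payment::Card { last4: "4242".into() };
    assert_eq!(describe(&p), "card ending 4242");
    println!("{}", describe(&p));
}
```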
There are some topics where analogies can sidestep some of this difficulty in teaching by basically keeping the shape of the dependency graph, and replacing it with one that happens in ordinary life. But I don’t think that every expert topic can have such a homology, fundamentally so. Complexity is unending, while “problems common in ordinary life” has limited complexity. That math paper that 10 people understand on Earth will not be readily accessible to Aunt Mary not even by the smartest person on Earth.
>“You’re talking about an infinite regress”, I said, when I had finished the glass.
>“Not infinite. Architects. Teachers. Teachers of teachers, but the art of teaching teaching is much the same as the art of teaching. Three levels is enough. Though the levels have to mix. The teacher who trains the next architect must be a master both of teaching and of architecture. I will spare you the math, but one needs a series of teachers at different points on the teaching-skill/architecture-skill tradeoff-curve. One will be a master teacher who has devoted decades to learning the textbook-writing skill, and who can write a brilliant Introduction To Architecture textbook that makes the first ten years of architecture ability seem perfectly natural and easy to master. Another will be a mediocre teacher who knows enough advanced architecture to write a passable textbook on the subject. Still another will do nothing but study pure Teaching itself, in the hopes that he can one day pass on this knowledge to others who will use it to write architecture textbooks. In practice we are limited to a few strategic points on the tradeoff curve.”
[1]:https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
This means that the asker is impatient, not the expert. An expert can explain everything from 0, and might even enjoy it.
Can he explain anything to me, given such a language boundary? Even though he knows stuff from 0, that doesn’t translate over to ability to explain.
Rust is safer than Python in the same way as having a backup sensor in your car versus not. Sure, you won't need it most of the time, but it may catch you that one time you almost backed into a wall because you were still groggy after a night out.
You got their attention and they understand your expert PoV. If they’re still curious and want to know more, you go deeper.
Sqeaky: Rust and Python are both good programming languages. Rust is good for high-performance, safety-oriented systems development, while Python is good for hooking two systems together or processing a little data right in front of me. Why do you ask? What are your goals? Maybe some context would help me understand.
Q: I'm a new developer and I want to build a web page; I wasn't sure what to use?
Sqeaky: Well, let's get into what programming you know, and let's discuss some libraries. Python and Rust both have options!
Q: ...
And the conversation would carry on from there based on the context the asker provided, because contextualizing conversations, questions, and data around a topic is part of having expertise.
"Rust is like the room of Steve, you know, the one with OCD. Separate drawers for underwear, separate for socks, all subdivided by days of the week. And nothing, nothing is. ever. out. of. order. Sounds like a lot of work up front, but you know what Steve never is? Late. Or in dirty or mixed up outfit. Or unsure whether something is lost or who was it lent to."
"Now, some programming is more like writing experimental music; other is more like double-entry bookkeeping for a company. The flexibility Python gives you is great for the former, but you'd likely prefer something more strict for the latter."
Experts deeply understand their subject, and tailor answers to meet the audience where they're at. If you express genuine interest in someone's passion, they'll be ecstatic. That's what the OP is talking about. It's not about how experts (or anyone) interacts with belligerent and dismissive interlocutors. And observing teaching moments is just one aspect of a larger smell test for detecting imposters. I think it's a good heuristic within that scope!
I would say even the Feynman Lectures for undergrads aren't simple, but that's arguable, so let's talk about Feynman diagrams, a supposedly intuitive invention of Feynman. Pretty little line drawings of particle interactions are indeed useful for surface-level explanation, but physics is quantitative (that's how we verify anything), and staring at line drawings gets you nowhere in that regard. So the layperson might ask, "how do physicists calculate anything with Feynman diagrams?" at which point Feynman himself would probably be speechless.

The path integral calculation for the simplest case in QED is about a page long, which is complete gobbledygook for the mathematically unprepared (>99.5% of the general population), and even for the mathematically prepared, the calculations mean nothing without tons of theory buildup. I have on the shelf right next to me the classic tomes on quantum field theory: >800pp Peskin & Schroeder, >800pp Schwartz, three volumes of Weinberg, all filled with dense mathematics. You need to study at least about a quarter of those to properly understand the aforementioned topic, not to mention all the prereqs: various formulations of classical mechanics, some classical electrodynamics, quantum mechanics, special relativity.

There's no shortcut; anyone claiming to have an intuitive understanding of Feynman diagrams (the real deal, not just the line drawings) from reading some "simple" explanations is lying, regardless of whether they have genuine interests. (Btw, the ones with genuine interests are sometimes the worst: the "my theories are correct, I'm just bad at math!" alt scientists who love to email their papers to our entire department.)
In conclusion, there are fields where there's really no way to explain things beyond the extreme surface level in simple terms, and ironically Feynman's own is one of those.
- Experts that tailor their answers to meet the audience are experts, but not only experts, they also have the luck of finding good analogues for parts of a system or topic, and a skill, or luck, of story telling and structuring teaching.
- Individuals who use analogues and simplifications to describe a system or topic are not necessarily experts, they can also be lucky or skilled imitators, or just teachers.
- Experts who are experts by the definition of having a deep understanding of the subject, but who are incapable or unwilling to simplify and/or structure the story well (in your subjective opinion), are still experts, but unless you also become an expert on the topic, it will be hard for you, or anyone else, to trust their expertise.
True experts can understand from the layman's perspective and tailor the response to that level of understanding.
I've been working with experts on highly technical topics most of my career. They can explain the big picture to a layman just fine, but if the layman asks something within the actual scope of their expertise, it takes a long time to get there.
The Feynman paradox:
> if you can't explain it to a six-year-old you don't understand it yourself
but also
> If I could explain it to the average person, I wouldn't have been worth the Nobel Prize
The former is probably apocryphal.
I’ve worked with plenty of experts who are also impatient, short and condescending with anyone who doesn’t grasp things straight away. That doesn’t mean they’re an imitator, it just means they’re an arsehole.
People aren’t binary. An expert isn’t automatically a good teacher. And, in fact, some imitators are actually better teachers. It’s almost as if teaching and “doing” are different skills ;)
Hard disagree. A person can be an internationally recognized expert working on e.g. a novel mathematical theory useful for cryptography, without being able to explain any of it to anyone without a university-level math education.
I posit you're combining two orthogonal properties, expert knowledge and being a good communicator, into one axis because that's a useful combo, or the make-up of an expert that you'd like to know and/or hire.
No matter how well I understand something, my ability to relay that information effectively to another individual will always be another matter entirely. It's true that I can't explain something I don't understand, but it doesn't logically follow that I only truly understand something if I can explain it.
It is an entirely different skill to educate someone with your expertise when you don't get their feedback (tv, radio, youtube) or when they don't want to learn (classroom teacher, office politics around bean counters). Those are two kinds of education and they are important skills all on their own but are in no way tied to other kinds of technical expertise.
If we were both working on a project that needed your and my expertise. If I understood that you had different sensibilities about social norms. If I respected those while asking questions about parts of your explanation I didn't understand. Would you be able to explain a detailed topic to me?
Even if the listener knows little to nothing about the subject, being eager to understand and asking questions while respecting my sensibilities, however realistic that expectation may be, makes all the difference for me.
I know of no easy answer, but I have met people like this and seen them succeed. I hope you're able to successfully wrangle this extra complexity that the neurotypical will foist upon you without even understanding that they've done it.
How would anyone know they are an expert if they can’t make themselves understood?
Unfortunately I think it's naive. I've seen people ready to say BS things with a touch of acronyms getting more traction, and at higher levels, than some experts. It's true that experts can walk a layman into deeper subjects, but so can people used to half-BS who know how to convince others…
I remember getting a technical interview with the CTO of a startup. He knew all the vocabulary of DDD, Event Sourcing and the like and was talking at length about it, but he couldn't explain even what he was saying, like what an aggregate is, etc… I'm sure he was convincing to other officers and the tech team, but not to me.
Because of that I feel like almost only experts can detect non experts.
The world is full of people like that. Consultants are sometimes literally all they do.
Like, yeah I'm able to explain our postgres HA setup + DC-redundancy at fairly detailed technical level. But then a member of the board asks "Yeah but what does it cost and what benefit does it bring the customers and why?"
Finding a satisfying and comprehensive answer to that question takes a whole new perspective and made me understand a few things more. And question a few more.
In my career I have many times been asked to understand a system and to evaluate it and come back with an explanation to the non-technical. The first few times I made no accommodations for cost or support planning. And when asked how much such a thing would cost I inevitably gave wrong answers based on sticker prices or cost of hardware or something similarly trivial. Today I'm always evaluating systems I work in in terms of technician and expert hours in addition to the cost of the things we purchased for them to work on.
If somehow I were surprised today and asked to give a price for standing up such a system, I would know enough to say I need to do some review and formal planning. I would staunchly, but politely, refuse to give up any numbers, and say that I can come back with a detailed plan with itemized costs after I actually write down all my notes, because even after having worked in software for 20 or more years, there are just too many little details that can add a million dollars to a project. After having bought such time, I would write down all my thoughts, gather all my notes, and look around for things I missed. Then I'd provide the costs and benefits to the bean counters in the only language they all speak: dollars. And I would be sure not to put a single number in any field; they would all be ranges, possible minimums and maximums. I've been asked in the past why they are ranges, and the answer is that they help articulate uncertainty. That is another whole discussion, but people who don't work with the technology often don't understand the uncertainty that comes from these very certain, digital, and predictable machines.
Which is not to say that shallow knowledge (whether AI or human) is a bad thing. I'm very happy to be helped out by ChatGPT in places where I'd be in the shallow knowledge ('imitator') camp. Still, the limitations of current AI seem quite similar to the 'imitators' described here (noting that plenty of humans aren't above making stuff up either).
Thinking about how to use that: asking someone to explain something to me, and noting where I fail to understand, to see how deeply they get it.
So that kind of confuses everyone including the experts.
Network growth, trade growth, capital growth, info growth all are happening faster than expert growth.
To make progress, you have to keep the optimism of a neophyte.
When going about the unreasonable, I tend to ask myself 3 questions:
1. Has this area recently been unlocked by progress elsewhere? e.g. ML with cheap GPUs
2. Have I built skills that make me uniquely suited to tackle this?
3. Is this problem relatively unsexy? Everyone wants to build a better javascript framework, no one wants to wear the pager.
I find the most promising areas to be unlocked when I can answer yes to all three things.
Funny, the founders of a lot of successful startups usually tell an anecdote similar to this. They will often undertake a challenge, naively thinking that it's easy, only to find out how colossal it actually is. Then this becomes their moat or differentiator.
Naiveté can be a factor which would amplify number of people trying.
I’m not sure who has the bias in this case.
The startup ? The parent comment ?
An expert would have a pretty decent idea how and what kind of pitfalls to prepare for, even if the specifics are largely undefined.
Otherwise your expert is simply a neophyte in this context, even if possibly with more transferable experience in the more general sense.
And crucially, so many things that were supposed to be impossible...
As you practice a craft, you build up positive knowledge - what works. And you build up negative knowledge - what doesn't work. But you also build up humility; the more you learn, the more you realize you don't know.
Knowledge increases sub-linearly, assuming a modicum of curiosity, but humility is like a parabola. The "experts" of this article are those at the bottom of the humility parabola. They have quite a bit of positive knowledge, some negative knowledge, but they don't really know yet what they don't know.
_Many "experts" don't consider themselves experts._ They all too often say "Hmm, interesting question, I don't know, but...". They are defined by humility and curiosity.
Imagine the question "Why is the sky blue?". Somebody who has just finished an undergraduate degree in theoretical physics and happens to have learnt about Rayleigh scattering will sound much more like an expert than somebody who says "Hmm, interesting question, I don't know, but..." and then spends fifteen minutes figuring it out on the spot.
Like the phenomenon of the newish driver: nobody seems more of an expert driver than somebody who passed their test three months ago. They have learnt all the rules, they think they know everything, they often don't have much curiosity, and they have yet to learn humility.
Of course, this often doesn't matter. Many people don't want an actual expert - they want somebody who sounds like an expert _to other people_. Oracle don't advertise to people who buy software. They advertise to the people who second guess the people who buy software.
To quote a wonderful ex-co-worker, "Most people need a generalist. But they want - and are willing to pay for - a specialist."
Basically, sometimes if you bounce your ignorance off a true expert you can see it reflected back in a positive light as they try to massage ideas into your perspective. Bullshitters aren't able to do this.
What a silly thing to say. High-quality information is derived from accurate sources subjected to scientific rigor over time. The best people? Best at what? Character and competence aren't the same.
Most of the time, an intermediate person with good judgement is better, I guess, and they might have more free time too.
His strategy is to take a controlling stake in a struggling company with good fundamentals but poor management. Then he’ll “fix up” the small problems and return the company to profitability.
How exactly is that parasitic?
The basic problem is that people do not want to appreciate the social value of capital allocation, because a side effect of it is often making millions or even billions of dollars for yourself. And yet not having this mechanism is precisely why the USSR collapsed.
Imagine this from the perspective of the employees: years of bad management, facing down the barrel of unemployment in a shrinking industry. Suddenly Buffett turns up, fires the bad managers and tells you profits are up so here’s a Christmas bonus. Capitalism at work.
Because shallow dismissals of wealthy / successful people are all the rage. Surely none of these people can actually have any talent. They're all just lucky or irredeemably evil, right?
> CM: what if we had no manufacturing and our only businesses were hedge funds?
Buffett is very aware of that thinking. Even if CM is Charlie Munger (vice chairman of Berkshire Hathaway).
Keep in mind that the majority of the world's population grows up in places with little access to the information one needs to become an "expert".
So go easy on imitators. Help guide them to enlightenment.
Don't flip the bozo bit.
Ninety percent of the time when I interview on that subject the candidate ends up getting a "hard no" from me just based on the first ~10 minutes of the interview, but on a very rare occasion I run across someone who's actually an expert in the field. I'll know because we'll quickly blast past all of the "<subject> 101" questions in the first 5 or 6 minutes, and then I can quickly adapt to deep-dive into technical details, giving them what I know to be fairly novel problems in the domain and then seeing how they apply first principles to tackle them. The "interview" ends up looking more like a collaborative brainstorming session at that point. It's incredible when that happens, which sadly is only maybe once or twice a year.
But usually I end up giving an "Expertise" interview for whatever it is they claim expertise in, whether I myself possess expertise in that subject or not. For the past several months the most prominent subject on literally every single resume has been AI/ML. I certainly don't claim expertise in that field, although I did take a graduate course on computational machine learning theory at university. That gives me "just enough" of a handle to not come across as a completely incompetent interviewer, but it feels like a farce.
With all this AI/ML hype I feel like the "Expertise" interview just ends up being someone pretending to have expertise in AI/ML being interviewed by someone pretending to be able to assess expertise in AI/ML.
There are no "Experts," only "Levels of Expertise"; and the only way to identify someone's correct level is to be at a particular level yourself, based on an objective and honest appraisal of your own knowledge in that domain. Both the perceiver and the perceived are factors to be considered. In common parlance the title "Expert" is bestowed on somebody (usually to the bemusement of that person) when a sizable group of the population acknowledges (not necessarily logically, since we have Marketing/PR/Spin/Propaganda/etc. involved) him/her to be at a higher rung on the knowledge ladder than themselves in a particular domain. The caveat is that, given the complexity/depth of any domain today, the vast majority of the population are not to be trusted in their opinions. You can only somewhat trust the judgement of the "peers" of the "Expert" in that particular domain. Knowledge is gained only through a) Direct Perception, b) Logical Inference, and c) Valid/Authentic Testimony, and all play a part here.
Gaining knowledge isn’t particularly difficult. But expertise comes from actually applying that knowledge in the real world. The real world is messy and chaotic.
It’s also multidisciplinary. Acquired knowledge tends to be limited to the topic at hand. We find topics we like and learn more about them. But the rough edges where disparate disciplines meet is where expertise grows.
Think of all the money managers who borrow their talking points from Warren Buffett. They might sound like Buffett, but they don’t know how to invest the way Buffett does. They’re imitators. Charlie Munger once commented: “It’s very hard to tell the difference between a good money manager and someone who just has the patter down.”
The best ones will not take your money, or there is no easy way to invest (e.g. Renaissance Technologies). The bad ones are practically begging for clients and spend lots of money on advertising. Also, performance metrics...
It's not always bad not to be an expert. And it's fairly rich to start with an example from asset management, an industry that habitually mistakes luck for expertise.
It is performing mimicry.
What happens when people try to "learn" from imitators.
I disagree. Successful imitators are highly skilled at misdirection and will be able to come up with good-sounding answers. Experts on the other hand might not have the best answers ready at hand. What the author is describing is simply an interview, and we all know interviewing is a skill on its own.
I can count on a single hand the number of people I have considered experts in their field.
Titles are given not based on expertise but on how many years of experience you have and how many people you know.
It takes experience to recognize that most software is still garbage, but more importantly to determine which parts of the garbage heap are useful.
The primary problem is the very definition of any unique terminology or product is distorted by the industry itself to fit a marketing niche.
After a few years people sound like they've had a stroke and bought a Turbo Encabulator:
Bad analogy. There are domains that admit experts, e.g. numerical methods for partial differential equations. Investment management is not one of them. There is far too much luck involved, and in Buffett-like cases, size and reflexivity confound matters further. Often we look for experts in fields that do not admit experts and come away disappointed.
Uh, or they just don't have answers to things off hand.
A college professor who lectures every semester multiple times about something is very good at reproducing knowledge and fielding questions on that knowledge.
However, someone whose expertise is largely procedural will have difficulty fielding answers to "deep questions".
Depends on a lot of things. Preferably not for free.
(see "Ripley Underground" by Patricia Highsmith for more info...)
:)
Anyway, doesn't work on me, I'm not afraid of not being able to tell bullshitters from people with experience, and I'm also not afraid of listening to bullshitters.
As the Principia Discordia reminds us, "bullshit makes the flowers grow, and that is beautiful".
A real expert is extremely rare to find.
People who think of and present themselves as experts in the field are plenty.
You will know you have met a real expert when you talk to them long enough. I've met a few; the knowledge difference was a canyon, and their approach to solving problems was superb.
I try to keep in contact with those people because, out of the few hundred devs I've met, I considered only 3-5 real experts.
many people understand the what
some understand the how
fewer understand the why
Or, put in a different way: many people understand the API; some understand the implementation, and fewer understand the constraints that shaped everything.
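That layering can be made concrete with a tiny sketch (the function name, the backoff numbers, and the use of `ConnectionError` are all invented for illustration): the signature is the "what" an API user sees, the loop body is the "how," and the jitter line only makes sense if you know the "why", a constraint that lives entirely outside this code.

```python
import random
import time

def fetch_with_retry(fetch, retries=3, base_delay=1.0):
    """The "what": call fetch(), retrying on failure. This is all the API says."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            # The "how": exponential backoff between attempts.
            delay = base_delay * (2 ** attempt)
            # The "why": jitter exists because of a constraint nowhere visible
            # in this file -- if every client retried on the same schedule,
            # they would all hit the recovering server at once (a thundering
            # herd). Remove it and the code still "works" in every test.
            time.sleep(delay + random.uniform(0, delay))
```

Someone who understands only the API would call this function correctly; someone who understands the implementation could rewrite the loop; only someone who knows the constraint would understand why deleting the jitter is a mistake.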
Are You An Expert? (2009)
but yes, AGI will be here in 2 years... if only I were an expert and could know this for sure!!! :D
> Imitators don’t know the limits of their expertise. Experts know what they know, and also know what they don’t know. [...] Imitators can’t. They can’t tell when they’re crossing the boundary into things they don’t understand.
[0] - https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
The actual definition is what you said, while the popular definition is that the worst performers think they're experts.
Tenure allows the lazy to be lazy, perhaps, but 'most' are not lazy by any reasonable standard. Tenure does allow tackling harder, riskier, or non-normative research programs, which is probably a huge net positive.
If you are not familiar with the area, there is no way to distinguish experts from imitators. In other words, imitators are not stupid: they are just not experts in the field.
That's indeed what the post's linked article on first principles covers. Yeah, the author's a bit high on Elon Musk in that article, but the premise and conclusion seem pretty sound: it's hard to expand your knowledge of a topic if you're unwilling to acknowledge the limits of that knowledge.
On the other hand, that differentiation strategy is vulnerable to the "Calvin's Dad" gambit: an expert imitator tends to be good at coming up with bullshit on the fly to generate plausible yet inaccurate answers to those "why?" questions.
Where did the author claim to be an authority on the subject?
There ya go; I can be a dismissive cynical asshole too, attacking the mere form of your post without even spending a single word on the content.