If you want to learn category theory in a way that is more orthodox, a lot of people recommend Tom Leinster’s Basic Category Theory, which is free[1]. I’m going to be working through it soon, but the bit I’ve skimmed through looks really good if more “mathsy” than things like TFA. It also does a better job (imo) of justifying the existence of category theory as a field of study.<p>[1] <a href="https://arxiv.org/pdf/1612.09375" rel="nofollow">https://arxiv.org/pdf/1612.09375</a>
Disclaimer for the book, and for category theory in general: most books are optimized for people who already master mathematics at an undergraduate level. If you're not familiar with algebraic structures, linear algebra, or topology, be prepared to learn them along the way from different resources.<p>Category theory is also not that impressive unless you already understand some of the semantics it is trying to unify. In this regard, the book itself presents, for example, the initial property as trivial at first glance, unless you notice that it does not simply hold for arbitrary structures.
If someone does not want to check the mathematics line by line and prefers to give the article the benefit of the doubt, note that it also presents this JavaScript:<p><pre><code>  [1, 3, 2].sort((a, b) => {
    if (a > b) {
      return true
    } else {
      return false
    }
  })
</code></pre>This is not a valid comparator: it returns booleans where the API expects a negative, zero, or positive number. On my Chrome instance it returns `[1, 3, 2]`. That is roughly the level of correctness of the mathematics in the article as well, which I try to lay out in a sibling comment: <a href="https://news.ycombinator.com/item?id=47814213">https://news.ycombinator.com/item?id=47814213</a>
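For contrast, a valid comparator returns a negative, zero, or positive number (a minimal sketch, not taken from the article):

```javascript
// A valid comparator for Array.prototype.sort must return a negative
// number when a should come first, a positive number when b should
// come first, and zero for ties -- never a boolean.
const byValue = (a, b) => a - b;

console.log([1, 3, 2].sort(byValue)); // [1, 2, 3]
```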
I think it is pretty obvious that the challenge with all abstract mathematics in general, and category theory in particular, isn't that people don't understand what a "linear order" is, but that it is so distant from daily routine that it seems completely pointless. It's like pouring water over perfectly smooth glass
>so distant from daily routine that it seems completely pointless<p>imo, this is a problem with how it's taught! Order theory is super useful in programming. The main challenge, beyond breaking past that barrier of perceived "pointlessness," is getting away from the totally ordered / "Comparator" view of the world. Preorders are powerful.<p>It gives us a different way to think about what correct means when we test. For example, state machine transitions can sometimes be viewed as a preorder. And if you can squeeze it into that shape, complicated tests can reduce down to asserting that <= holds. It usually takes a lot of thinking, because it IS far from the daily routine, but by the same rationale, forcing it into your daily routine makes it familiar. It lets you look at tests and go "oh, I bet that condition expression can be modeled as a preorder on [blah]"
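As a rough sketch of what that can look like (the state machine and its states here are made up for illustration): take reachability over the transition relation as the preorder, and a test becomes a single `leq` assertion.

```javascript
// Hypothetical transition relation for a tiny order-processing machine.
const transitions = {
  created:   ["paid", "cancelled"],
  paid:      ["shipped", "cancelled"],
  shipped:   ["delivered"],
  delivered: [],
  cancelled: [],
};

// Reachability gives a preorder on states: leq(a, b) means
// "b is reachable from a" (reflexive and transitive by construction).
function leq(a, b, seen = new Set()) {
  if (a === b) return true;            // reflexivity
  if (seen.has(a)) return false;       // guard against cycles
  seen.add(a);
  return (transitions[a] || []).some(next => leq(next, b, seen));
}

// A test then reduces to one order assertion:
console.log(leq("created", "delivered")); // true
console.log(leq("shipped", "cancelled")); // false: unreachable
```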
Is there a "mind-blowing fact" about category theory? Like the first time I heard that one can prove with <i>group theory</i> that there is no general solution in radicals for polynomial equations of degree ≥ 5, it was mind-blowing. What's the counterpart for category theory?
A thing is its relationships. (Yoneda lemma.) Keep track of how an object connects to everything else, and you’ve recovered the object itself, up to isomorphism. It’s why mathematicians study things by probing them: a group by its actions, a space by the maps into it, a scheme in algebraic geometry defined as the rule for what maps into it look like. (You do need the full pattern of connections, not just a list — two different rings can have the same modules, for instance.) [0]<p>Writing a program and proving a theorem are the same act. (Curry–Howard–Lambek.) For well-behaved programs, every program is a proof of something and every proof is a program. The match is exact for simple typed languages and leaks a bit once you add general recursion (an infinite loop “proves” anything in Haskell), but the underlying identity is real. Lambek added the third leg: these are also morphisms in a category. [1]<p>Algebra and geometry are one thing wearing different costumes. (Stone duality and cousins.) A system of equations and the shape it cuts out aren’t related, they’re the same object seen from opposite sides. Grothendieck rebuilt algebraic geometry on this idea, with schemes (so you can do geometry on the integers themselves) and étale cohomology (topological invariants for shapes with no actual topology). His student Deligne used that machinery to settle the Weil conjectures in 1974. Wiles’s Fermat proof lives in the same world, though it leans on much more than the categorical foundations. [2]<p>[0] <a href="https://en.wikipedia.org/wiki/Yoneda_lemma" rel="nofollow">https://en.wikipedia.org/wiki/Yoneda_lemma</a><p>[1] <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="nofollow">https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...</a><p>[2] <a href="https://en.wikipedia.org/wiki/Stone_duality" rel="nofollow">https://en.wikipedia.org/wiki/Stone_duality</a>
well, this is more applied and less straightforwardly categorical, but thinking along the lines of looking <i>solely</i> at compositional structure, rather than at all the properties of functions we usually take as semantic bedrock in functional programming (namely referential transparency), is how you start doing neat arrowized tricks like tracking state in the middle of a big hitherto-functional pipeline. For instance, automata (functions which return a new state/function alongside a value) can be neatly woven into pipelines composed via arrow composition, in a way they can't be in a pipeline composed via function composition.
<a href="https://en.wikipedia.org/wiki/Abstract_nonsense" rel="nofollow">https://en.wikipedia.org/wiki/Abstract_nonsense</a><p><a href="https://math.stackexchange.com/questions/823289/abstract-nonsense-proof" rel="nofollow">https://math.stackexchange.com/questions/823289/abstract-non...</a><p>Sometimes a proof in category theory is trivial, yet we have no lower-dimensional or concrete intuition for why it is true. This whole state of affairs is called abstract nonsense.
Well, group theory is a special case of category theory. A group is a one object category where all morphisms are invertible. You do group theory long enough and it leads you to start thinking about groupoids and monoids and categories more generally as well.
One of the most striking things is that categorical products of objects do not always correspond to set-theoretic cartesian products. This was mind-blowing to me when studying schemes.
I think that CT is more akin to just a different language for mathematics than a solid set of axioms from which you can prove things. The most fact-y proof I've personally seen was that you can't extend the usual definition of functions in set theory to work with parametric polymorphism (not that just some constructions won't work, but that there isn't one at all).
Sure, category theory can't prove the unsolvability of the quintic. But did you know that a monad is really just a monoid object in the monoidal category of endofunctors on the category of types of your favorite language?
Just the Yoneda Lemma. In fact it feels like the theory just restates the Yoneda Lemma over and over in different ways.
You're more right than you'd think. The whole point of mathematics is precise thinking, yet the article is very inaccurate.<p>Nobody seems to care or notice. I'm watching in disbelief how nobody is pointing out the article is full of inaccuracies. See my sibling thread for a (very) incomplete list, which should disqualify this as a serious reading: <a href="https://news.ycombinator.com/item?id=47814213">https://news.ycombinator.com/item?id=47814213</a><p>My conclusion can only be that this ought to be useless for the general practitioner, since even wrong mathematics is appreciated the same as correct mathematics.
You say pretty obvious, but it took me 2 years during my PhD to be consciously aware of this. And once I did, I immediately knew I wanted to leave my field as soon as I would finish.
I once saw a man with a notebook and pencil drawing these kinds of diagrams; at the time I took them for graph theory. I wasn't in an extrovert moment and missed my chance to ask. He seemed to be working on them recreationally. I'm wondering about puzzles that could be easily created using these theories / maths. You, practitioners, any suggestions?
> I once saw a man with a notebook and pencil drawing these kinds of diagrams, at the time I saw them as graph theory.<p>I have been engaged in some work on s-arc transitive graphs in algebraic graph theory. You'd be surprised how rarely I have to draw an actual graph. Most of the time my work involves reasoning about group actions, automorphisms, arc-stabilisers, etc.<p>For anyone curious what this looks like in practice, I have some brief notes here: <<a href="https://susam.net/26c.html#algebraic-graph-theory" rel="nofollow">https://susam.net/26c.html#algebraic-graph-theory</a>>. They do not cover the specific results on s-arc-transitivity I have been working on but they give a flavour of the area. A large part of graph theory proceeds without ever needing to draw specific graphs.
The author's writing style and overuse of parentheses are excruciating. True parenthetic material is rare; good technical writers use parentheses sparingly.
There is a way to frame category theory such that <i>it's all just arrows</i> -- by associating the identity arrow (which all objects have by definition) with the object itself. In a sense, the object is syntactic sugar.
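A minimal sketch of that framing (the encoding and names here are my own, purely for illustration): arrows carry their source and target, and an object is recovered as its identity arrow.

```javascript
// A category presented with arrows only: each arrow knows its source
// and target, and an "object" X is just the identity arrow id_X
// (the one with src === dst), i.e. syntactic sugar.
const arrow = (src, dst) => ({ src, dst });
const id = x => arrow(x, x);

// Composition is defined exactly when the middle ends meet.
const compose = (g, f) =>
  f.dst === g.src ? arrow(f.src, g.dst) : null;

const f = arrow("A", "B");
console.log(compose(f, id("A")));  // { src: 'A', dst: 'B' }  (unit law)
console.log(compose(id("B"), f));  // { src: 'A', dst: 'B' }
```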
This resource is a really clear breakdown of order relations; visualizing the structure like this makes the abstract concepts much more digestible
binary relations defining order are more nuanced than they seem; a linear order isn't just about ranking, it's about the structure of the relationships themselves.
studying category theory for my master's in 2015 showed me how orders influence everything from data structures to algorithms. foundational stuff.
this reminds me of Haskell’s type classes; they elegantly define order concepts through their own set of rules, capturing relationships in a clean way.
I love how math is like a new language, in a new country, of a culture you are not exactly familiar with.<p>This article is like living there for a few months. You see things, some of them you recognize as something similar to what you have at home, then you learn how the locals look at them and what they call them. And suddenly you can understand what somebody means when they say:<p>"Each distributive lattice is isomorphic to an inclusion order of its join-irreducible elements."<p>Having a charitable local (or an expat with years there under their belt) who helps you grasp it because they know where you came from, just like the person who wrote this article, is such a treasure.
Unless there's some idiosyncratic meaning for the `=>`, the Antisymmetry one basically says `Orange -> Yellow => Yellow -/> Orange`. The diagram is not accurate. The prose is very imprecise. "It also means that no ties are permitted - either I am better than my grandmother at soccer or she is better at it than me." NO. Antisymmetry doesn't exclude `x = y`. Ties are permitted in the equality case. Antisymmetry for a non-strict order says that if both directions hold, the two elements must in fact be the same element. The author is describing strict comparison or total comparability intuition, not antisymmetry.
I don't think they are <i>completely</i> wrong - "=>" is just implication. A hidden assumption in their diagrams is that circles of different colours are assumed to be different elements.<p>A morphism from orange to yellow means "O <= Y". From this, antisymmetry (and the hidden assumption) implies that "Y not <= O".<p>Totality is just the other way around (any two distinct elements are comparable in one direction).
If this is meant to be an explainer, that can't be simply implicit. The text actually seems full of imprecise claims, such as:<p>"All diagrams that look something different than the said chain diagram represent partial orders"<p>"The different linear orders that make up the partial order are called chains"<p>The Birkhoff theorem statement, which is materially wrong. A finite distributive lattice is not isomorphic to "the inclusion order of its join-irreducible elements".
It really isn't a long enough section to get lost in.<p>The 'not accurate' diagram says that orange-less-than-yellow implies yellow-not-less-than-orange. Hard to find fault with.<p>> NO. Antisymmetry doesn't exclude `x = y`. Ties are permitted in the equality case. Antisymmetry for a non-strict order says that if both directions hold, the two elements must in fact be the same element. The author is describing strict comparison or total comparability intuition, not antisymmetry.<p>I like the article's "imprecise prose" better:<p><pre><code> You have x ≤ y and y ≤ x only if x = y</code></pre>
My comment is not long enough either to get lost in.<p>The prose "It also means that no ties are permitted - either I am better than my grandmother at soccer or she is better at it than me" is inaccurate for describing antisymmetry. In the same short section, you first state the correct condition:<p>You have x ≤ y and y ≤ x only if x = y<p>from which it doesn't follow that "It also means that no ties are permitted". The "no ties" idea belongs to a stronger notion such as a strict total order, not to antisymmetry.
The prose is correct.<p>You (presumably) aren't your grandmother, so we have x=/=y. Therefore by the biimplication, (x ≤ y and y ≤ x) is false i.e. either x ≤ y (I am better than my grandmother) or y ≤ x (my grandmother is better than me). The "neither" case is excluded by the law of totality.
> The "neither" case is excluded by the law of totality.<p>We literally said the same thing. It doesn't follow from antisymmetry.<p>My point is precisely that:<p>(x <= y /\ y <= x) -> x = y<p>does not entail<p>x <= y \/ y <= x<p>The second statement is totality/comparability, not antisymmetry.
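A concrete way to see the gap (my own toy check, not from the article): divisibility on {2, 3, 6} satisfies antisymmetry but not totality, because 2 and 3 are incomparable.

```javascript
// Divisibility as a non-strict order on positive integers.
const leq = (x, y) => y % x === 0;

const elems = [2, 3, 6];

// Antisymmetry: x <= y and y <= x together force x = y.
const antisymmetric = elems.every(x =>
  elems.every(y => !(leq(x, y) && leq(y, x)) || x === y));

// Totality: every pair is comparable in at least one direction.
// It fails here: neither 2 divides 3 nor 3 divides 2.
const total = elems.every(x =>
  elems.every(y => leq(x, y) || leq(y, x)));

console.log(antisymmetric); // true
console.log(total);         // false
```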
The first 90% of this is standard set theory.<p>I'm unclear what the last 10% of 'category theory' gives us.
I think the last 10% is exactly where the useful part is, at least for programmers.<p>In a preorder seen as a category, there is at most one arrow between any two objects. So every diagram commutes and uniqueness is basically free. Then products and coproducts stop looking like magic diagrams and become something very familiar: greatest lower bounds and least upper bounds.<p>Small nit: preorders are thin categories, but posets are the skeletal thin categories. In a preorder you can have distinct a and b with both a <= b and b <= a, which means they are isomorphic, not literally the same. Quotienting by that equivalence gives you the poset.<p>The software angle is the part I find most useful. This kind of bug shows up when we force a total order onto something that is only partially ordered, or only preordered. Dependency graphs, versions, permissions, type hierarchies, CRDT states, rule specificity, build steps. A lot of these don’t really want a comparator and a sort. Sometimes they want a quotient, a topological sort, a join, or just the honest answer that two things are not comparable.<p>That feels like the practical lesson here: category theory is not always adding abstraction. Sometimes it is just a good way to stop pretending two different structures are the same thing.
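A small sketch of the "products are greatest lower bounds" point, using divisibility as the order (my own example, not from the parent comments):

```javascript
// In the divisibility order, the categorical product of a and b is
// their greatest lower bound: gcd. The coproduct is the least upper
// bound: lcm. "Not comparable" is a perfectly honest answer.
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
const lcm = (a, b) => (a / gcd(a, b)) * b;
const leq = (a, b) => b % a === 0;   // "a divides b"

console.log(leq(4, 6));  // false: 4 and 6 are incomparable...
console.log(leq(6, 4));  // false: ...in either direction,
console.log(gcd(4, 6));  // 2   -- yet their meet (product) exists
console.log(lcm(4, 6));  // 12  -- and so does their join (coproduct)
```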