What ought?
It is so fun to be competent. I dunno man, I think being decent at something is a large part of the fun. I wonder whether I can have fun being like really mediocre at something. I mean... I don't have to be like top 1%, but like... at least top 10%...? Like... I really think if I sing better I will enjoy it more, or dance better, or play an instrument better. Isn't it more fun when you are good?? Or at least not TOO bad.
Competency, however, comes at the high cost of practice and whatnot, which requires time and energy and such.
I read "The AI Con" recently. Interesting book about how this AI hype cycle is no different from the rest.
Now, here are my thoughts, which include a significant amount of ideas from the book. However, I'm not an LLM, so these are my own thoughts and not a regurgitation or a rehash (qualified later).
Book + brief thoughts on book
So, on the things where I agree with the book (well, at least what I interpret the book to say):
1. LLMs are a dead end, no artificial general intelligence is coming out of LLMs
2. Much of what AI purports to be able to do, it cannot actually do (especially in the area of replacing workers)
3. Corporate leadership jumps on the AI bandwagon too much and thinks that it can do much more than it actually can. In other words, AI appears better to leadership than to workers.
What this implies then is that:
4. The current capital allocated to AI is inefficient and there are opportunities to profit by betting against, especially, pure play LLMs - not that I am intelligent enough to figure out how. I tried for maybe 30 mins or so and gave up.
Ok. The book also seems a bit woke to me and talks about racism, environmental damage, oppression of poor and marginalised communities, and other such inequalities. I'm not particularly convinced that AI is an exceptional harm on that front, any more so than the current structures already in place. At most it is a tool used by the powerful, but if not AI, I reckon any other tool would be as ably wielded. The fact that AI biases are embedded within AI is just net neutral... no? I mean, a racist AI judge is the same as a racist judge / jury. Wrt the environmental damage due to high energy cost, well, I do believe that the world will reallocate capital away, so it seems to me that this is just a short-term hit, if any. You can't be right on both fronts, that AI will doom the world and that AI doesn't work as people think it will... right? I doubt people are THAT deluded.
The book also goes into the history of AI hype cycles and covers the impact of AI on work -> sucks, doesn't work. Social services and the creatives -> sucks AND not human (not just sucky because not human). It also argues that fearing an AI that will take over the world and hoping AI will lead to a better world are two sides of the same coin, and that AI will do none of that sort. Well, I suppose the analogy is quite apt that it will be something like the steam engine over horses and wind: relatively and significantly change the world, but not THAT big a break.
There's also a large bit on AI being theft, stealing other people's intellectual property and such and such. To me, if it is crawlable or on the internet, then it is usable for AI training. There's no SHOULD or SHOULD NOT, it just is, and if you think it is a SHOULD NOT, then it is on you to figure out how to enforce the should not -> sabotaging some datasets sounds really fun and, again, can be done. This is the new reality. There's also a bit on how it exploits people in the third world to do a lot of the manual training of AI models... well, nothing different from regular exploitation IMO.
AI - at best a useful next-rung tool
My main working theory is that this current version of AI, and versions for the foreseeable future, will suck, and most jobs will be well protected from AI. And we should define AI here. The book actually goes on to say that AI is such a shit term, used purely for hype, making people think there is intelligence when there is actually none - together with "hallucination" and other anthropomorphic terms. Within the current field of AI, there are quite a few distinct things being done, each of which has its own use cases. Examples would be categorising, decision making (or recommendation), and text and image generation.
To me, I like the non-text-based AIs a lot more because I think their use cases are clearer by far and there is no false impression that they will change the world beyond their specific use cases. I am a firm believer in AI as a large data model, with the purpose of predicting by extrapolating from past data, except that the dataset is too large and complex to set explainable and deterministic rules. This would be, of course, why I own lmnd stock. But digression aside, this is actually what AI (or actually this deep neural net thing powered by massive compute, which is what I'm going to mean when I say AI) is doing in all its forms, be it categorising, decision making (or recommendation), or text and image generation. I think it would be quite useful to think of AI as prediction machines built from training data rather than as possessing any understanding. Here we have to recall the Chinese room thought experiment from the philosophy archives and know that there is really NO ONE in the Chinese room, and hence no understanding, DESPITE it being very very intuitive to us humans that proper wielding of language = understanding.
Honestly, regarding work, I feel like we are so far away from using deterministic rule automations (i.e. code) that we really don't need to try this AI shit for a large majority of tasks, because, simply put, the data isn't so large or complex that it cannot be put into deterministic rules. The obvious benefits of deterministic rules over AI are that they are explainable and, well, repeatable, which the AI isn't. Insofar as the rules are upkept (as I presume the AI's training data will be updated), code automation is more useful and practical any day.
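To make concrete what I mean by deterministic rule automation, here is a minimal sketch. Every field name and threshold below is made up for illustration, not from any real system:

```python
# A hypothetical expense-approval rule. The point is that the logic is
# explainable (you can read exactly why a claim was rejected) and
# repeatable (the same input always yields the same output).
def approve_claim(amount: float, has_receipt: bool, category: str) -> bool:
    if not has_receipt:
        return False            # no receipt, no payout: a readable rule
    if category == "travel":
        return amount <= 200.0  # travel claims get a higher cap
    return amount <= 50.0       # everything else gets the default cap

# Same inputs, same answer, every single time - unlike an LLM with
# sampling randomness built in.
print(approve_claim(30.0, True, "meals"))    # -> True
print(approve_claim(300.0, True, "travel"))  # -> False
```

And when the policy changes, you change one line and the behaviour changes everywhere, visibly: that is the "upkeep" that a retrained model only gives you opaquely.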
Well then, obviously it follows that the narrower the use case and the more tailor-made the data supporting the use case, the better the results. This is because there will be insane amounts of noise if insane amounts of data are put in, as any data person can tell you. Bespoke AI to do certain tasks will have an advantage where useful data exists in large quantities and very snap decisions have to be made, such that it may outperform humans on that very particular task. These are seldom the text outputs (LLMs) I'm thinking of, but more like, e.g., classifying whether a particular transaction is likely fraudulent out of many, many.
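As a toy illustration of that kind of bespoke, narrow classifier: everything below is fabricated. The features, the data distribution, and the tiny logistic-regression trainer are all assumptions for the sketch, not any real fraud system.

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def make_data(n: int):
    # Fabricated "transactions": fraud skews towards larger amounts and
    # foreign merchants; legitimate ones smaller and local.
    data = []
    for _ in range(n):
        fraud = random.random() < 0.5
        amount = random.gauss(8.0 if fraud else 2.0, 1.0)  # in $100s
        foreign = 1.0 if random.random() < (0.8 if fraud else 0.1) else 0.0
        data.append(((amount, foreign), 1.0 if fraud else 0.0))
    return data

def train(data, epochs: int = 50, lr: float = 0.1):
    # Plain logistic regression fitted by stochastic gradient descent.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y) in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

data = make_data(500)
w, b = train(data)
correct = sum(
    1 for x, y in data
    if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1.0)
)
print(f"accuracy: {correct / len(data):.2f}")
```

The accuracy comes out high precisely because the task is narrow and the data is tailor-made for it; dump in everything about everything and the same machinery drowns in noise.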
Thoughts on LLMs - stuck in the text
LLMs, through some impressive amount of training, are essentially almost able to "rule-base" most languages and output sensible sentences and paragraphs. This is already an incredible achievement, as many people still seem unable to speak proper English. In my opinion, teaching proper language skills / rewriting in a style is probably one of the best use cases for LLMs, because that is exactly what the training data is for (assuming there is some cleaning for the quality of the English, or at least a large majority is good-quality English so it doesn't matter).
However, when you start using LLMs to make predictions about the real world, there is this very very obvious gap in information between what the LLM is trained on (i.e. text) and the real world. Text, however useful it is, is but a limited representation of the real world and most definitely not the real world. If you do Bible study with any amount of rigour, you would know that any text assumes a lot of hidden, common, shared, non-textual information and is actually a means of communication between a reader and a writer. By chucking in a massive amount of text, presumably it is stripped of its context AND of the information stored in the "metadata", e.g. that this blog post is written by me at this time. Even if this "metadata" were included, the next step of deducing the significance of me and this time (e.g. 2026, when AI is at such and such a stage) would be very difficult; the LLM would struggle to discern which are the relevant facts (assuming it has the relevant facts somewhere in its large database).
For example, I was playing around in some LLMs, writing a story about Singapore, and I noticed that it could spit out things that are commonly written down, e.g. common Singlish, place names (especially web-famous ones), weather, etc. Nonetheless, it made some rather absurd assumptions, e.g. that 25-35-year-old characters stay alone - even more absurd, in a HDB; that a random Chinese name is a Muslim hanging out with a bunch of Muslim girls; that teachers start work at 9am. Presumably it knows that most people stay in HDBs, that these are common names in Singapore, that Singapore has a significant Muslim population, etc. Presumably if I prompted it to think about that particular issue or fact specifically, it would output the right answer, but unprompted, the prediction is too general and thus wrong. As such, in theory, you could prompt engineer everything deeply, but then I'm not sure the marginal gains are worth the very lengthy prompts, and it also seems to me that LLMs don't work too well with very lengthy prompts. Also, simple continuity issues really make writing more than, say, 3000 words a very big pain. I don't even know why that is, given that the context window for newer models is now very large; it seems like the LLM is unable to correctly identify emphasis or the need for continuity and retrieve it from within the context window, and sometimes just plugs in generic values from its training base. Side note: I also find the writing style of LLMs for narrative really repetitive and boring, like it's boring prose, very clichéd.
Putting aside the fact that it has randomness inbuilt, the very fundamental project of having LLMs give correct statements about reality seems to me to be fundamentally futile if we understand that LLMs are text prediction machines trained on text because text in itself is not a 1. full and complete 2. current and updated 3. POV-less representation of reality. This is made worse by garbage in garbage out and the fact that language is always evolving, as is reality.
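The inbuilt randomness is easy to see if you sketch the sampling step. The token scores below are made up, and this is only the standard textbook picture of temperature sampling (softmax over scores, then one random draw), not the internals of any particular model:

```python
import math
import random

def sample_next(scores: dict, temperature: float, rng: random.Random) -> str:
    # Softmax over scores scaled by temperature, then a single random draw.
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for tok, e in exps.items():
        r -= e
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# Made-up scores for the next token after "The hawker sold me a ...".
scores = {"kopi": 2.0, "teh": 1.5, "pigeon": 0.2}
rng = random.Random(0)

# Near-zero temperature collapses to the top-scoring token every time;
# at temperature 1.0, the same prompt yields different continuations.
print(sample_next(scores, 0.01, rng))                       # -> kopi
print({sample_next(scores, 1.0, rng) for _ in range(200)})  # a mix of tokens
```

So even before you get to the text-versus-reality gap, two identical runs need not agree with each other, which is already fatal for "correct statements about reality".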
In theory, then, LLMs should have a decent use case in areas of abstraction where language or symbols are reality by definition, e.g. math. I think a purpose-built math LLM would do quite well and perhaps be able to challenge some frontiers in math (I dunno for sure). But general LLMs seem to quite suck at even basic math at times due to randomness, while math is deterministic. Furthermore, unless it is a very obvious application, I highly doubt the LLM's ability to predict a novel application outside its training dataset - and our math as of now seems to require very novel applications of disparate observations. Long gone are the days when you could just think hard about something recursively and come up with new designs (though to be fair to the pioneers, theirs were probably very novel applications as well, and an LLM trained on the text of that time would suck incredibly).
So what LLMs have going for them is a massive amount of text that has some verified claim to reality, or at least a communicated representation of reality, massive computing power to process everything, and lots of poor people to train it to be "better" manually. Insofar as it is a massive corpus of text that is somewhat organised, I guess there is some value to be had there, although the next best alternative (not even sure that it is a worse alternative) is a search engine that finds you the base text for you to read, interpret contextually, and make judgements on by yourself. Instead you get this generalised output from the LLM that might include low-quality crap and reasonable-quality crap all muddled together, such that you can't tell the difference, and without sources to boot. So you end up with real slop, low-quality trash that might be passable in many cases, but you don't know which parts are full of trash because it is stripped of all context, and your brain becomes dumb because it just eats the slop.
Adding something on about work
I also fully agree that managements love AI a bit too much with too little understanding, and shoving it down employees' throats will bring no happiness to anyone, not least the employees. So far, all the LLM solutions that I see in my workplace really cannot make it. If not for this hype and the "oh yeah, look at us do AI", I'm quite sure they would all be thrown out. The problem is really that the AI solution is not bespoke enough; uploading a few documents into a "knowledge base" and slapping on a generic LLM is WORSE than just ctrl-F-ing the documents.
Especially with a particular agency that thinks there are a million useful use cases for bots that can be created in 5 minutes. Wtf, seriously, how do you think you can create anything useful in 5 minutes??? There are just not enough adjustable parameters for it to give any level of differentiation from just talking to a generic LLM. Honestly they are super lame; grok "projects" is probably their entire model, and the value add is... suspicious, at most just, yknow, you don't need to repeat the prompt all the time, i.e. a templated prompt. It's crazy that it's still being funded and pushed.
Think I need to learn how to order my life again. After I hard-banned myself from playing neopets, it followed the predictable pattern of worse and worse substitutes, coinciding with a great lack of willpower due to external circumstances, e.g. work getting busier. Predictably, I start to clutch at straws when I spiral, and so I spiral down more and more.
I don't like myself being self-absorbed. I must say, I think that on the whole I have improved over the years through these cycles. I don't like myself when I am going through these cycles, and I say that very clearly now, presumably somewhat outside of the cycle. But inside, the things going through my mind are so: "I want this", "I really want this", "Why don't I have this", "How can I have this", scheme scheme plot plot, imagine and fantasise. Like really kinda crazy.
Then it dominates my brain, in the way that makes everything else become noise. Yknow, the way that presumably God should dominate your mind where you focus on Jesus and the things on earth go strangely dim. Here we have the opposite where you become self absorbed and everything else goes strangely dim.
Tbh, I think that every cycle it actually gets much better. I don't think there was a noticeable dip in work or social "performance", and I duly read my Bible, but of low quality, and prayed, but also of low quality.
Is this the upward climb? Where every bit of progress is hard-fought and accompanied by a significant slide back?
Maybe I am trying to do too many things this year, who knows? Anyway I am reading a book on AI sucking. Am keen to clear it and write something about it.
Recently life has not been so easy, because I am short on willpower because work is busy, and I still can't find a way to make my neopets time better; instead, it might be worse. But while I am short on willpower to achieve things through my own means, and I can exhaust my actions as futile, I am not short on grace and mercy from God.
I would like to cling closer to God, to trust that whatever is my situation now is what he has planned for me for my good. And therefore, to be content in the present. And to trust that whatever will come in the future, is also for my good. And therefore, to be hopeful about the future. And to trust that whatever has gone by, was planned for me for my good. And therefore, to be thankful for the past.
You know, when you see things from God's eye view, all the things grow strangely dim. What is a promotion versus a soul saved for eternity? A hundred thousand dollars vs a brother turned back from the road to damnation? Or what is 10 years of suffering if it makes me a better saint?
And then you plunge back into reality, into the earth's-eye view, and then a promotion becomes very important because it impacts my future earnings and financial security. A hundred thousand dollars becomes 3 years of living expenses or 2 years of travelling. And the 10 years of suffering really really really suck, and I don't want it, and I want to go home now.
Oh that we could be transcendental always.
Maybe I need to rethink my social policy again... I think I'm too clingy towards people. Like if I see someone's soul, and it is beautiful, I try to keep it in my life. Or maybe not just soul hor, maybe if pretty I also try to keep lol. But it's sometimes a futile task, and one about which I am thoroughly helpless (无奈). But I am really really bad at cutting losses early, until other people have to cut losses for meeeeeee. Especially in relationships, seriously. I'm such a clown, thinking that more effort and time will solve it.
Hmm.
It is unfortunate that it takes so much effort to read and write, or even engage with what garlica is talking about after he reads and writes.
Given revenue games, I feel like I have been working 12 days consecutively and I am exceedingly tired. Work has also been very tiring because I am actually trying hard to produce something and facing some difficulties in producing it. Not to mention that it is rather sian to knock work out of the park.
On the bright side, this tiredness seems to have snapped some distasteful things.
On the down side, I injured my heel.
On the up side, it is mostly recovered.
I am rambling. Think I should read the Bible pray and sleep.
Hmmm.
So I decided to cut out playing armorgames or neopets or any sort of game in order to use the time better. Honestly, on relatively free days, I spend maybe 2 hours on these things, or maybe even more. So this is designed so that I can be bored and actually use the time productively, trying to tackle one of the largest 'wastes' of my time.
But I have found that it is very hard. And this coincided with me not planning much for Christmas / New Year's, and suddenly I had a rich abundance of free time that I wanted to spend with others. But others were occupied. And then, I guess I felt a bit sad. But yeah, I suppose it is a lack of forward planning on my part, and assuming that select people will be spontaneous.
Anyway the larger point is that it is very hard, to replace this large block of activity.
This large block of activity serves a few purposes:
1. It is low energy, low willpower, easy to do. (This is as opposed to something like writing a blog post which is high energy)
2. It gives a sense of long term accomplishment when I see number go up
3. It doesn't take that much bandwidth (in most situations, although many times there is a cost for number go up optimally)
As such, it is like "cheap meaning": low energy for some feeling of meaning. This value proposition is quite unbeatable, which is why it has persisted for so long - it is a pseudo-productive activity.
So when I am bored, what are the natural substitutes? Other low-energy, low-willpower things. But most of these have even lower levels of meaning, i.e. essentially none. These would be things like watching anime or YouTube, playing around with gen AI, or social media. Since stopping those games, my YouTube / Insta reels consumption has jumped up quite a bit. And when I partake in such activities, I find them exceedingly hollow and want to find something more exciting, and that's not good. Exciting activities, for me, are a very very sharp double-edged sword which is generally bad for me.
It feels like playing a low cost game like neopets is really the best I can do with the amount of existing energy because the natural alternatives are worse. It is like the lesser evil.
Well, of course, the alternative is to infuse A LOT of willpower and try an unnatural alternative, making it natural by building a new habit. But the alternatives are all a lot costlier in energy - things like reading / writing / playing piano... even like going out intentionally to do some activity. There seems to be no... easy intermediate step. This gap is HUGE and I dunno if I have that much willpower.
What a shitty predicament. Because it also took a reasonable amount of willpower to stop playing the games. This is truly a local peak that seems very hard to escape. But in theory, escape it I must. I won't be pleased if I am a very rich Neopian in 10 years, though I might try to bluff myself that I am.
I suppose there are some reasonable intermediate steps, like sleeping earlier, i.e. just drain the energy into something useful and then sleep earlier. But this also runs into problems, like not feeling like sleeping yet and such.
Hmmmmmm. Should I just go back to playing? The obvious answer is no, right? But otherwise I do worse things naturally. Hmmmmmmm.
I really wanna not waste time. But it is so hardddd.
There seems to always be this need, within my life, to strive for some form of progress, some tangible, visible form. And well, it can come out in many ways, but most notably in some sort of long-timeframe game where progress is made incrementally over time and I, as the player, improve.
But it's a freaking itch!!! I cannot leh; while I like my brain when it is not hooked on it, I feel like my free time just gravitates towards it. I can try to replace it with less time-consuming forms, like neopets or what, but it is so hard to break. If only following God gave me a tangible growth bar, I'd minmax the shit out of it. But well, arguably, there is la, I just don't do it.
[[To be]]
[[The Story Thus]]
[[The Talk (also silent)]]
[[The Ancients]]