Sunday, May 21, 2023

Truth to Power.

Unless you're a politician with a "lower calling..." or a well-paid member of the mass media, you probably don't understand or care that freedom, once won by the blood and sacrifice of a people, should never be squandered or sold.

https://t.me/GeopoliticsAndEmpire/35981

Time to take serious stock of how the power behind the media outlets and their useful talking heads has become an outrageous propaganda tool.

CHD 

LOL, did they bully you into taking the experimental mRNA shots?

Or this trip down memory lane... IF YouTube/Google/Blogspot will let you see it! It may be better to copy and paste the URL into your own browser; that way you can probably see it. This is an example of shadow banning.

  https://youtu.be/zI3yU5Z2adI 

When a message comprised of truthful clips gets censored, you know you're over the target. Don't forget the part all of these uninformed - or worse, informed - actors played in creating the recent catastrophe. Digital bullies acting at the dawn of a dystopian era. Listen carefully to the crafted and scripted voices. Contemplate whether these individuals understand what a free society is. Ask yourself what we can learn from this dry run attack on your personal freedoms...

AML NLO, JRO and SEO AML

Wednesday, May 17, 2023

Uh-oh AI warnings a-plenty...

Many years ago, as a grad student, I studied and wrote some AI (coded it, if you will). The data sets back then were pretty logical and binary, though they could still be ambiguous. That is to say, inferences were made based on hard logic, and of course you as the programmer decided how you were going to weight - favor or penalize - data, like values that were out of range, etc. The goal, of course, was to drive a correct conclusion. This stuff eventually became the fault detection algorithms used in a plethora of applications - the fault diagnostic tools used in cars to isolate a bad sensor, for example. Just to add a little color here, these AI systems have been around for 30 years and can still get it wrong. Rarely, but it still happens. Reel the hands of time forward a few decades and we have some influencers pointing out a very old axiom: Garbage In = Garbage Out. GIGO, if you're old enough to remember that acronym. Today we are applying those approaches to far more complex and data-rich environments. Rich in volume, but not necessarily accurate, or even of known accuracy. Read on.
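To make that concrete, here's a minimal sketch of the kind of rule-based fault detection described above. The sensor names, ranges, and weights are made up for illustration, not taken from any real diagnostic system.

# Minimal sketch of an old-school rule-based fault detector.
# Sensor names, ranges, and weights are hypothetical, for illustration only.

SENSOR_RULES = {
    # name: (low limit, high limit, weight given to an out-of-range reading)
    "coolant_temp_C": (70.0, 110.0, 0.5),
    "o2_sensor_V":    (0.1,  0.9,   0.3),
    "map_kPa":        (20.0, 105.0, 0.2),
}

def fault_score(readings):
    """Sum the weights of every sensor whose reading falls outside its range."""
    score = 0.0
    for name, (lo, hi, weight) in SENSOR_RULES.items():
        value = readings.get(name)
        if value is None or not (lo <= value <= hi):
            score += weight
    return score

readings = {"coolant_temp_C": 128.0, "o2_sensor_V": 0.45, "map_kPa": 60.0}
if fault_score(readings) >= 0.5:
    print("fault flagged, score:", fault_score(readings))

Note that the conclusion is only as good as the limits and weights the programmer chose, and as the sensor data fed in - GIGO in miniature.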

If you data mine bad or misleading inputs, you will come up with misleading results. AI is not some prescient wonder worker. On the contrary, it has become a sophisticated aggregator with rules written by programmers that steer the algorithms to favor certain results. Test it for yourself: take politically dubious statements and see what kind of a chat you come up with. There's your first data point. Filtered - some would say censored - results are a common practice, and even if liberal boundary conditions are applied, bad inputs (even scholarly articles that were bent toward funding and not truth) yield crazy bad results.
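A toy illustration of that point - the "sources", support scores, and blocklist below are invented - showing how a programmer-supplied filter can flip an aggregator's conclusion without changing any of the underlying data:

# Toy aggregator: the programmer's filter and weights steer the "answer".
# Sources, scores, and the blocklist are invented for illustration.

sources = [
    {"claim": "A", "support": 0.9, "outlet": "journal_x"},
    {"claim": "B", "support": 0.8, "outlet": "blog_y"},
    {"claim": "B", "support": 0.7, "outlet": "forum_z"},
]

blocklist = {"blog_y", "forum_z"}   # filtered - some would say censored

def aggregate(items, blocked=frozenset()):
    """Return the claim with the highest total support among allowed outlets."""
    totals = {}
    for item in items:
        if item["outlet"] in blocked:
            continue
        totals[item["claim"]] = totals.get(item["claim"], 0.0) + item["support"]
    return max(totals, key=totals.get)

print(aggregate(sources))              # -> "B" (unfiltered consensus)
print(aggregate(sources, blocklist))   # -> "A" (the filter flips the result)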

Kind of an obvious conclusion for those in the trenches, but a revelation for the mystified general public - and, I suppose, for the equally mystified oligarchy that was hoping to replace people with software...

That play has already been run, with disastrous results, in various places. Only the folks in the trenches don't get a lot of air time to make that clear. It's good that people with a microphone are finally listening and talking. Taking any code (AI or otherwise) and putting complete trust in its veracity is for the naive or ignorant. Applying it to real solutions without rigorous testing - same. Actually reckless.

In order for AI to give consistently meaningful results, it needs truthful, accurate and correctly represented data. LOL, today's world is filled with a lot of garbage. I had to laugh when I watched a TV "news program" give us a speech on the concern about misinformation as an AI result. Now let's see: if the truth is counter to the narrative of the day, that is often labeled misinformation. If the current state of the art is wrong and AI simply echoes it, that really is misinformation. And if you are promulgating propaganda, AI will be a very wild animal to tame! You probably need your own data set of whacked-out versions of facts for that. I'm sure somebody is working on that right now.

All of the terabytes of mouse clicks and e-mail responses... all of the URL traffic and those downloaded papers from sketchy, corporately funded "scientific journals" - none of it has a truth detector or a label that holds anybody accountable for fabricating results. Unlike the German beer purity law! LOL, good luck with the whole AI thing. I'm sure it will be useful at times, absolutely wrong on occasion, and simply worthless at times as well. In a gross sense: untrustworthy. Let's not forget AI is written by people.

Let's see publication transparency: programmer names, funding sources, and descriptions of what data was used and what programmed weighting factors and filters were applied. Were barred names and/or words used in the algorithms? Were filters that assign a weighting factor applied? All of that should appear on a warning label to be read and agreed to at the entry point to the website, or printed on the front page of any results that get printed off. Maybe then you or your institution can be a judicious consumer of the conclusions a certain AI program comes up with. Then, if it's harmful or wrong, you have recourse to legal remedy. Hmmmm, there's an idea! How about a disclaimer on the evening news as to whether any Photoshopped materials were used? How about digitally enhanced video or images? LOL, yup, lots to be fixed to get this right. We want to feed the AI good stuff, don't we? How much un-vetted, unvalidated stuff are we consuming without an AI-generation disclosure today? LOL... ah, the mind wanders.
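One way to picture that warning label is as a structured disclosure record attached to every result. This is only a hypothetical sketch of the fields suggested above - there is no standard schema like this today, and the field names are made up:

# Hypothetical "warning label" record for an AI result. The fields below
# (programmers, funding, data sources, weighting, blocked terms) mirror the
# disclosures suggested above; they are not any real product's schema.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIDisclosure:
    model_release: str
    programmers: list
    funding_sources: list
    training_data: list
    weighting_factors: dict
    blocked_terms: list = field(default_factory=list)

label = AIDisclosure(
    model_release="0.9-beta",
    programmers=["<names withheld>"],
    funding_sources=["<grant or sponsor>"],
    training_data=["<corpus description>"],
    weighting_factors={"recency": 0.6, "source_reputation": 0.4},
    blocked_terms=["<redacted>"],
)

# Print the label so it can be shown (and agreed to) before any result.
print(json.dumps(asdict(label), indent=2))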

AI is a software tool. Written by people, it data mines what people write, publish and click on, and makes assertions based on that pile of inputs. It reads sensors that have ranges and varying accuracies, and all of it can go bad in various ways. Its developers attempt to incorporate additional functions like facial recognition, speech spectral nuances... and tons of other stuff from the IoT. How much of this is really well understood as it pertains to a distilled result? Methinks AI is still in its infancy, with lots of vulnerabilities, but I'm sure it will progress as a tool. It will get "cleaned up"; perhaps it will give consistently correct results for narrow applications. But what release of the software will you be experiencing? How will you know? Maybe the legal terms and conditions can slip a low confidence interval into the EULA for the cheap release, with a giant jump in user fees for the "pro" good stuff? LOL, marketing gold.

What if the memory location the program and data set reside in gets corrupted? Hacked into, or simply electronically damaged? There are a lot of DLLs that can wander into RAM and cause mayhem. Would that be like a digital stroke - LOL? Or maybe a nefarious data set mod that is indeed a well-targeted act of harm... how would that be safeguarded against? Keep those jump drives away! Could you ever cede human decision making in high-stakes applications to software, with complete trust, considering all of the ways software and hardware can be attacked?

Perhaps a simple financial transaction and the role of "order taker" at a fast food place are easy and overdue targets for an expert system (weak AI with speech synthesis), but moving much further beyond that carries substantial liability risks that most informed owners should be careful about accepting. Alpha, beta and gamma test the heck out of the tool before you adopt it! Then realize that, like any software, it has a useful life dependent on upgrades, operating systems, hackers... not an Easy Button when you look at the whole picture. Looks like an AI insurance policy is just what the actuaries need to get going on. Could be a new revenue line...


AML JRO, NLO and SEO AML

 

Monday, May 8, 2023

Perspective

How many of us stop to think about how fuel-hungry the current global trade model is?


An average tanker (the red icons) consumes about 180,000 kg of oil per day at sea...

The average commercial jet airliner consumes about 120,000 kg of jet fuel per day



These two snapshots were just a peek at traffic happening today - a day like any other day. If we truly embrace the fact that oil is a finite resource and we are well into experiencing its slow depletion... how do logistics like this make sense? How much of this is superfluous?

Change in the way we live is an absolute certainty. When you hear about greenhouse gases and carbon blah blah blah, take a moment to ponder what happens every day in order to get your widget delivered to your door. Or just to keep the fuel tanks full at the corner gas station...

In a year of continuous operation, just a single tanker at sea will consume 65,700,000 kg of oil... or 413,000 bbl of oil, or 17,346,000 gallons, or 2,317,425 cubic feet, or 85,830 cubic yards of never-to-be-seen-again petroleum. As a point of reference, 3,300 cubic yards is the volume of one Olympic-sized swimming pool. That's 26 Olympic swimming pools full of refined petroleum per year! Just a little over 2 pools burned per month.

A single average airliner, continually operating, burns 43,800,000 kg of refined oil... or 275,000 bbl during that same year.
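For anyone who wants to check the arithmetic, here's the back-of-the-envelope conversion behind those figures. It assumes a 42-gallon barrel, about 7.48 gallons per cubic foot, and - consistent with the numbers above - roughly 159 kg of fuel per barrel (a density near 1 kg/L; real fuel oil and jet fuel are somewhat lighter, which would push the barrel counts higher):

# Back-of-the-envelope check of the yearly fuel figures above.
# Conversion factors are approximate; 159 kg per barrel is an assumption.

KG_PER_DAY_TANKER     = 180_000
KG_PER_DAY_AIRLINER   = 120_000
KG_PER_BARREL         = 159        # assumed density near 1 kg/L
GALLONS_PER_BARREL    = 42
GALLONS_PER_CUBIC_FT  = 7.48
CUBIC_FT_PER_CUBIC_YD = 27
POOL_CUBIC_YARDS      = 3_300      # one Olympic pool, rounded

def yearly(kg_per_day):
    """Convert a daily burn rate into yearly kg, barrels, gallons, cubic yards, pools."""
    kg = kg_per_day * 365
    barrels = kg / KG_PER_BARREL
    gallons = barrels * GALLONS_PER_BARREL
    cubic_ft = gallons / GALLONS_PER_CUBIC_FT
    cubic_yd = cubic_ft / CUBIC_FT_PER_CUBIC_YD
    pools = cubic_yd / POOL_CUBIC_YARDS
    return kg, barrels, gallons, cubic_yd, pools

print(yearly(KG_PER_DAY_TANKER))     # ~65.7 million kg, ~413,000 bbl, ~26 pools
print(yearly(KG_PER_DAY_AIRLINER))   # ~43.8 million kg, ~275,000 bbl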

Of course no ship or plane runs 24/7/365, but there are tens of thousands of them in the air and at sea, running intermittently day and night, which makes for continuous non-renewable consumption at an alarming rate. The true cost of this global trading model is astronomical and unsustainable. This contrivance is a blip in the history of the World that cannot be sustained, and it inefficiently drives a consumption- and "growth"-based economics fairy tale.

Physics will win in spite of bitcoins, digital whatever and delusional planning.

The changes that physics will drive will impact us all in the next 20 years or so (in a single generation!). We can, if we're cooperative, tenacious enough and smart enough, drive the logistics of trade toward "manufactured sustainably and locally, consumed locally, and recycled locally". But there will be no place for the current economic system with its unrealistic grasp of reality. That, perhaps, is the bridge too far for planet Earth.

Perhaps - if this world is to exhibit intelligence and evolve - we can all agree not to waste our gifts on failures like war, corruption and greed.  Wouldn't that be an excellent outcome?  


AML NLO, JRO and SEO AML