Love it or hate it, SAP IDM is being phased out in favour of alternative IDM solutions come 2027. SAP announced this on its website, with Microsoft Entra ID named as a recommended alternative.
There are plenty of other IDM alternatives, but each will require integration.
|Organisation|Sector|Known records breached|Month of public disclosure|
|---|---|---|---|
|Real Estate Wealth Network|Construction/ real estate| | |
|Indian Council of Medical Research (ICMR)| | | |
| |IT services/ software| | |
| |IT services/ software| | |
| |IT services/ software| | |
|Dori Media Group| | | |
|SAP SE Bulgaria|IT services/ software| | |
The collective store of knowledge has been generated by humans in forums, Reddit, Stack Overflow, websites, and wikis, and surfaced by search engines. That corpus was used to train ChatGPT. Since GPT-4, the generation of knowledge has been moving from the public sphere into direct private chats. This is a problem.
So the generation and capture of knowledge is now privately sourced and pooled into a machine, bypassing the human public domain.
Where will the next AI model get its training data from? GPT-Next will be trained on legacy data.
This raises multiple questions around ethics, the sanctity of data access, and the increasing importance of legislating for public data.
AI will become a dominant source of knowledge simply by virtue of growth. Yet it depends on training data that could become unavailable, monopolised, or locked behind chatbot licensing.
Trusting the output from AI is just as alarming. If we lose access to the training data and propagate AI outputs, we will lose the collective memory of human-generated thinking and lose the ability to validate AI. Just as with search, humanity will be told what to think and how to think, programmed by AI.
So to avoid a dangerous feedback loop, we need to individually and collectively support (and extend) the public domain.
So 2023 started with a big buzz around ChatGPT. OpenAI announced ChatGPT's availability just before Christmas, and soon the internet was abuzz with excitement. The ChatGPT app gained some 1 million users after 5 days, and 10 million users after 40 days, far faster than previous upstarts like Instagram and Twitter.
So what exactly is ChatGPT? It's an intelligent chatbot driven by the GPT-3 model. GPT stands for "Generative Pre-trained Transformer". Essentially, it's a weighted neural network with 175 billion parameters, the largest trained neural network to date. Microsoft's "Turing-NLG" model, at 17 billion parameters, was among the largest before it. GPT-4 is rumoured to have ~500 billion parameters. For comparison, the human brain has approximately 86 billion neurons. The model's training includes both supervised learning and reinforcement learning, and its architecture includes components like the Encoder, the Decoder, the Language Model head, pre-training, and fine-tuning.
So ChatGPT-3 has sparked lots of interesting conversations and use cases. People have started to apply it to work, assignments, content generation, coding, and general life questions. This was made possible by training the model on hundreds of gigabytes of text data. Will it surpass the Google search engine simply by being able to create a more intelligent answer beyond a list of search results?
- The Transformer architecture: The Transformer architecture is the foundation of ChatGPT. It is a neural network architecture that uses self-attention mechanisms to process input sequences. The transformer architecture is able to handle input sequences of varying lengths and allows for parallel processing of the input.
- The Encoder: The Encoder is composed of multiple layers of self-attention and feed-forward neural networks. It processes and understands the input text.
- The Decoder: The decoder is also composed of multiple layers of self-attention and feed-forward neural networks. It generates the output text.
- The Language Model Head: The language model head is a linear layer with weights that are learned during pre-training. It is used to predict the next token in the sequence, given the previous tokens.
- The Dialogue Generation Head: The dialogue generation head is a linear layer with weights that are learned during fine-tuning the model on conversational data. It is used to generate the response to a given prompt in the context of a dialogue.
- Pre-training: ChatGPT is pre-trained on a large dataset of text, which enables it to generate human-like text in response to a given prompt.
- Fine-Tuning: The model is fine-tuned on conversational data to improve its ability to generate responses in the context of a dialogue.
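To make the Encoder/Decoder description above concrete, here is a minimal sketch of the scaled dot-product self-attention those layers are built on. All dimensions, names, and weight values below are illustrative toy choices, not GPT-3's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X.

    X          : (seq_len, d_model) input embeddings
    Wq, Wk, Wv : (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarities
    weights = softmax(scores)                 # each row sums to 1
    return weights @ V                        # (seq_len, d_k) context vectors

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

In a real transformer this runs as multi-head attention, stacked with feed-forward layers, residual connections, and layer normalisation; the sketch shows only the core attention step.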
Training is another interesting consideration. How much data was needed to train GPT-3? Around 570 gigabytes of text. How long did it take? ChatGPT is also trained (some would say censored or biased) to avoid returning harmful answers. As model sizes increase, training time and data requirements will also increase significantly.
A concern for the future is that ChatGPT models are biased. Another concern is that humans become progressively more obsolete. Ethics for AI is going to be crucial for humanity.
- GPT-1 (Generative Pre-trained Transformer 1) was the first version (June 2018) of the GPT series released by OpenAI. It was pre-trained on a few gigabytes of book text (BooksCorpus) and had around 117 million parameters.
- GPT-2 (Generative Pre-trained Transformer 2) was released shortly after, in February 2019. It was pre-trained on a much larger dataset of about 40GB of web text and had 1.5 billion parameters, roughly ten times larger than GPT-1.
- GPT-3 (Generative Pre-trained Transformer 3) was released in 2020. It was pre-trained on a massive dataset of 570GB of text data and had a capacity of 175 billion parameters. It was fine-tuned for a wide range of language tasks, such as text generation, language translation, and question answering.
- GPT-4 (Generative Pre-trained Transformer 4) has not been released at the time of writing. It is rumoured to be pre-trained on a massive dataset of many terabytes of text with hundreds of billions of parameters, fine-tuned for an even wider range of language tasks with more accuracy and fluency than GPT-3.
Current Limitations of ChatGPT
- GPT-3 lacks long-term memory — the model does not learn anything from long-term interactions like humans.
- Lacks interpretability — this is a problem that affects extremely large and complex models in general. GPT-3 is so large that it is difficult to interpret or explain the output it produces.
- Limited input size — transformers have a fixed maximum input size, which means the prompts GPT-3 can handle are capped at around 2,048 tokens (a few pages of text at most).
- Slow inference time — because GPT-3 is so large, it takes longer for the model to produce predictions. Imagine how long GPT-4 will take.
- GPT-3 suffers from bias — all models are only as good as the data used to train them, and GPT-3 is no exception. The data for GPT-3 and other large language models contains biases, including hate-speech, religious, and political biases.
- Training time — with GPT-4 rumoured to arrive at 500B parameters, that is only a ~2.8x increase over GPT-3's 175B. Is the trajectory slowing down?
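The fixed context window limitation above can be sketched in a few lines. This is a hedged illustration only: the whitespace split stands in for the model's real byte-pair-encoding tokenizer, and `truncate_prompt` is a hypothetical helper, not part of any OpenAI API:

```python
def truncate_prompt(prompt: str, max_tokens: int = 2048) -> str:
    """Trim a prompt to fit a fixed context window.

    A whitespace split is used as a stand-in tokenizer; real models
    count byte-pair-encoded tokens, so actual limits differ.
    """
    tokens = prompt.split()
    if len(tokens) <= max_tokens:
        return prompt
    # Keep the most recent tokens, discarding the oldest context.
    return " ".join(tokens[-max_tokens:])

long_prompt = " ".join(f"word{i}" for i in range(3000))
short = truncate_prompt(long_prompt, max_tokens=2048)
print(len(short.split()))  # 2048
```

Anything beyond the window is simply invisible to the model, which is why long documents must be chunked or summarised before being fed to GPT-3.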
World Economic Forum 2023, Davos: The World Economic Forum (WEF) meeting is scheduled to be held in Davos, Switzerland, from January 16 to January 20, 2023. The forum has returned to its traditional January slot under the theme of 'Cooperation in a Fragmented World', where leaders from industry and government fly in on their private jets.
The WEF aims to look at how they can tackle the imponderable and interlinked challenges the world is facing and find solutions through public-private cooperation. However, it’s important for countries to ensure that decisions by governments are not affected by foreign influence and are made by elected-officials representing the populace.
The event will be hosted by the World Economic Forum in cooperation with the Swiss government and the Canton of Graubünden.
Items on the agenda are:
- Central Bank Digital Currency
- De-globalisation (oxymoron)
- Biotech for food security
- Data Privacy
Independent Media currently on-site:
- True North
- Rebel Media
Police in the Bahamas have arrested the founder of collapsed cryptocurrency exchange FTX after criminal charges were filed in the US.
Sam Bankman-Fried is accused of being the lead scammer who laundered money through FTX. The company, once valued at $47B, collapsed after speculation that FTX was insolvent triggered a liquidity crunch; the failure has since caused a contagion effect throughout the world of cryptocurrency.
In a statement, Bahamian Attorney General Ryan Pinder said the country is expecting an extradition request to follow shortly.
“As a result of the notification received and the material provided therewith, it was deemed appropriate for the Attorney General to seek SBF’s arrest and hold him in custody pursuant to our nation’s Extradition Act,” Pinder said.
“At such a time as a formal request for extradition is made, the Bahamas intends to process it promptly, pursuant to Bahamian law and its treaty obligations with the United States.”
Despite accusations that Bankman-Fried defrauded FTX's customers, he has remained in the public eye, making public appearances from the Bahamas, where his now bankrupt company was headquartered. Sam desperately needs to remain in the public eye to avoid being "whacked". Several crypto CEOs have recently been found dead.
Two weeks ago, he appeared at a New York Times conference, where he said he “did not try to commit fraud on anyone”.
Last week, SBF agreed to appear before a US House Committee and tweeted that he thought of himself "as a model CEO, who wouldn't become lazy or disconnected".
“I’m sorry,” he said. “Hopefully people can learn from the difference between who I was and who I could have been.”
The sudden collapse of FTX impacted thousands of Australians who had funds on the exchange, and has created market conditions that recently led to one of the country's largest exchanges, Swyftx, laying off 30% of staff.
Current CEO of FTX John Ray III – who is administering the company through its bankruptcy – also oversaw the infamous Enron bankruptcy. FTX donated large sums of money to the Democratic Party, and to certain Republican leaders (like Mitch McConnell).
“The FTX Group’s collapse appears to stem from the absolute concentration of control in the hands of a very small group of grossly inexperienced and unsophisticated individuals who failed to implement virtually any of the systems or controls that are necessary for a company that is entrusted with other people’s money or assets.”
This looks like a page out of the money laundering playbook.
Future conflict will involve sophisticated cyber warfare. Nations across the globe have recognised the strategic value and asymmetric advantage of investment in offensive cyber capabilities. They continue to evolve and advance their capabilities, contributing to an evolving strategic environment. In this context, cyber security is now one of our most critical tools to defend our people, capabilities, and ultimately, our nation.
On 31 August 2022, the Assistant Minister for Defence and Assistant Minister for Veterans’ Affairs, the Hon Matt Thistlethwaite released the Defence Cyber Security Strategy, outlining how Defence will strengthen its cyber security posture over the next ten years.
The Strategy details how Defence will combat cyber threats and ensure its capabilities are secure against attacks from adversaries. It presents the path to a cyber resilient Defence and the principles to maintain a strong cyber security posture in a shifting strategic environment.
Ultimately, the Strategy will contribute to a high-performing One Defence enterprise that can continue to deliver on its mission of defending Australia and its national interests.
The Strategy was released alongside the complementary 2022 Defence Information and Communications Technology Strategy.
US outdoor clothing company Patagonia is donating its profits to fight climate change. You'll see that splashed across the news.
However, could the real story be tax evasion?
You heard about Patagonia and their big donation? Interested in another perspective?
The Patagonia headline is “Billionaire Gives Away Company to Fight Climate Change”.
Wow that’s awesome right⁉️🔥
However, there’s more to it than most realize.
Yes 98% of the company is now owned by a non-profit that will leverage the $100M in profits to fight the environmental crisis.
BUT Yvon Chouinard and family save $700 MILLION in taxes.
He moved all his family's voting stock, approximately 2% of total shares, to an entity called the Patagonia Purpose Trust. This entity can now make unlimited political donations in the U.S.
The deal is designed for the ultra-wealthy to use non-profits to exert political influence long past their lifetime.
He won’t have to pay $700M in capital gains taxes for selling a $3 Billion company.
He avoids paying a 40% estate and gift tax when transferring large fortunes to heirs.
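As a back-of-the-envelope check of the ~$700M figure, assume (purely as an illustration) the roughly 23.8% top US federal long-term capital gains rate (20% plus the 3.8% net investment income tax) applied to the reported ~$3B valuation:

```python
# Illustrative arithmetic only: actual tax owed depends on basis,
# state taxes, and deal structure, none of which are public here.
company_value = 3_000_000_000   # reported ~$3B valuation
cap_gains_rate = 0.238          # assumed ~20% LTCG + 3.8% NIIT
avoided_cap_gains = company_value * cap_gains_rate
print(f"${avoided_cap_gains / 1e6:.0f}M")  # $714M
```

The result lands in the same ballpark as the ~$700M claim, which suggests the figure is a straight capital-gains estimate on the full company value.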
When asked for comment on the tax implications… crickets. 🦗😶
I’ll share the Bloomberg article here and in the comments.
I love the idea of positively impacting the planet. But “battling climate change” can mean a lot of things.
What do you think about all this⁉️😲👇🏾