

Diary of AI CEO [EP.3] : Back to What is AI?

3 Jan 2025

With so much noise about AI nowadays, laymen can find it overwhelming, confusing, and scary all at once. Inevitably, some vendors will disguise all sorts of things under the blanket term AI, and buyers gain peace of mind that they have purchased according to the boss's instruction that "we have to adopt AI NOW!" So, back to the question: what is AI?

Deep Learning = AI, and this is the most relevant form today.

Gen AI (Generative AI) = AI. It is also deep learning, but with the specific use case of being generative, i.e. generating something new from a prompt input.

Machine Learning = AI, although not in the breakthrough sense we have experienced recently.

Optimization != AI. Optimization is deterministic, mostly constrained optimization where there is an objective to minimize (cost) or maximize (sales). Examples: scheduling or routing problems. Given constraints on time, personnel cost, fuel cost, and the number of trucks, try to create a schedule or routing plan that is optimal in terms of cost subject to those constraints (a small example follows below).

Rule-based != AI. This is the furthest thing from AI, yet some vendors will still claim it is AI. Even when a vendor ends up using rule-based logic because there is no need for AI, users will still call it AI, since the project started with the goal of "use AI to do abc".

Discover unique insights into AI development from the perspective of an AI developer, unlike anything you've encountered before, in the upcoming episode of DIARY OF AI CEO. Stay tuned for its release soon. If you have specific topics related to AI development that you would like me to share, feel free to reach out via email at [email protected]. See you in the next episode!
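To make the "optimization is deterministic" point concrete, here is a minimal sketch of a constrained cost-minimization in the spirit of the routing example above, assuming scipy is available; the costs, capacities, and demand figures are made up purely for illustration and are not from the original post.

```python
from scipy.optimize import linprog

# Hypothetical numbers: two truck types, cost per trip and units delivered per trip.
cost_per_trip = [3.0, 5.0]          # objective: minimize total cost
units_per_trip = [10.0, 15.0]       # delivery capacity per trip
demand = 100.0                      # units that must be delivered
max_trips = [(0, 8), (0, 6)]        # at most 8 trips of truck 1, 6 of truck 2

# linprog handles "A_ub @ x <= b_ub", so the demand constraint
# 10*x1 + 15*x2 >= 100 is written as -10*x1 - 15*x2 <= -100.
result = linprog(
    c=cost_per_trip,
    A_ub=[[-u for u in units_per_trip]],
    b_ub=[-demand],
    bounds=max_trips,
    method="highs",
)
print("trips per truck:", result.x)   # deterministic: same inputs, same answer
print("minimum cost   :", result.fun)
```

The same inputs always produce the same plan; there is no learning from data involved, which is why optimization on its own is not AI.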


Diary of AI CEO [EP.2] : Misconceptions about AI & Data in Business

3 Jan 2025

In the first episode of Diary of an AI CEO, I shared insights on Artificial General Intelligence (AGI). Today, based on my experience engaging with both AI enthusiasts and business leaders seeking to implement AI to enhance organizational processes, I have observed recurring misconceptions regarding AI adoption. I have therefore compiled four key topics that I am frequently asked about, both by those around me and by clients, and would like to share these perspectives to offer readers alternative viewpoints for consideration.

4 Misconceptions about AI & Data in Business

1. Misconceptions about AI & Data in Business. Data do not always arrive where they should. Understanding the process is important: where, what, and when data become available for an AI model, or a human, to consume. Often we collect data in a system that can be fed to AI for training, but in production, in the actual workflow, the data has not arrived in time for the AI to use, so a model trained earlier on the complete final data set becomes useless. Likewise, having a lot of data does not always equate to business gain from that data. Some data may be completely unusable; unstructured data, for example, requires a lot of effort to turn into structured data that can be used for analytics, insight, or AI consumption. Some organizations have data in quantity but not in quality, requiring effort to clean it up and handle missing values. Even in the best case, when people think they have a lot of data, that data sometimes has no predictive power for what they want to accomplish (a quick check is sketched below). Data is then less useful than people take for granted.

2. Mistrust in AI: always blame AI first, even though the problem lies elsewhere. Bad input: users scan really badly.
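As a quick illustration of the "no predictive power" point above, here is a minimal sketch comparing a simple model against a naive baseline with scikit-learn; the data is deliberately random, and the library choice and setup are assumptions for illustration, not part of the original post.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: features unrelated to the label, mimicking
# "a lot of data" that carries no signal for the target we care about.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
model = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print(f"baseline accuracy: {baseline:.3f}")
print(f"model accuracy   : {model:.3f}")
# If the model barely beats the baseline, the data likely has little
# predictive power for this target, no matter how much of it there is.
```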


Get to know the Reranking model technique: A key tool for enterprise information retrieval systems

17 Dec 2024

We are now in the age of Large Language Models (LLMs). Since OpenAI released the GPT model, we've seen many applications emerge, such as smart search systems, knowledge management systems, chatbots, and machine translation tools, each powered by LLMs at their core. These applications rely heavily on data, requiring us to find similarities and rank results before processing. While the basic and commonly used method is to measure similarity with the cosine similarity between two vectors, today AIGEN will introduce another method: the reranking model.

Before we delve into the reranking model that AIGEN uses to improve the accuracy and performance of many AI services, we need to understand the history of ranking. We'll explore the evolution from traditional full-text search using BM25, to vector search, and finally to reranking models, and conclude by comparing the metrics of each method and highlighting their pros and cons.

How many types of ranking method are there? Ranking techniques play a crucial role in information retrieval and natural language processing tasks. Some of the most widely used methods include:

Full-text search (BM25). This is a probabilistic ranking function used to estimate the relevance of documents to a given search query. BM25 (Best Matching 25) is an improvement over earlier models and is widely used in search engines due to its effectiveness and simplicity.

Vector similarity. This method represents documents and queries as vectors in a high-dimensional space. The similarity between a document and a query is then computed using metrics like cosine similarity or dot product. This approach is particularly useful when dealing with semantic meaning rather than just keyword matching. Methods like TF-IDF vectorization or more advanced techniques like word embeddings (e.g., Word2Vec, GloVe) are often used to create these vector representations.

Reranking method
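To show where reranking sits relative to plain vector similarity, here is a minimal two-stage sketch: stage one ranks toy document vectors by cosine similarity, and stage two shows, in commented-out form, how the top candidates could be rescored by a cross-encoder reranker. The embeddings are made up, and the sentence-transformers dependency and model name are assumed examples rather than anything specified in the original article.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings; in practice these come from an embedding model.
doc_vecs = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.4, 0.8, 0.1]),
    "doc_c": np.array([0.1, 0.2, 0.9]),
}
query_vec = np.array([0.8, 0.3, 0.1])

# Stage 1: vector search -- rank every document by cosine similarity to the query.
ranked = sorted(doc_vecs, key=lambda name: cosine_sim(query_vec, doc_vecs[name]), reverse=True)
top_k = ranked[:2]
print("vector-search order:", top_k)

# Stage 2 (sketch): reranking -- rescore only the top-k candidates with a
# cross-encoder that reads the query and document text together.
# from sentence_transformers import CrossEncoder                    # assumed dependency
# reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")   # example model name
# scores = reranker.predict([(query_text, doc_text[name]) for name in top_k])
# reranked = [name for _, name in sorted(zip(scores, top_k), reverse=True)]
```

Reranking is more expensive per document than cosine similarity, which is why it is typically applied only to the small candidate set returned by the first stage.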


Diary of AI CEO [EP.1] : Get to know "Artificial AGI"

4 Dec 2024

Given the excitement around Generative AI and Large Language Model (LLM) intelligence, and how it can now solve problems and answer questions as naturally as humans, word on the street has turned to whether we are near AGI, or Artificial General Intelligence. AGI can be defined as AI that is generalized enough to deduce new knowledge from old knowledge through reasoning, even about things it has never seen before. However, despite the incredible smartness we have witnessed from LLMs in recent years, at best these LLMs appear intelligent across so many subject domains because of the way they memorize patterns from the enormous data they have been trained on; they do not really use reasoning to create responses. The argument here is that even if we reached what appears to be AGI through current LLM technology, it would at best be only an artificial AGI, hence the term artificial Artificial General Intelligence in the title (it's not an editing mistake!).

What is Artificial AGI? Seemingly, these LLMs appear to be almost like AGI. But in fact, at best they can only be artificial AGI, or Artificial Artificial General Intelligence. True AGI should be able to reason and infer new domain knowledge from the existing domain knowledge it was trained on, just as humans do even as small children. The way humans learn from seeing just a few examples of new information is still out of reach of AGI for the same computing resources and memory. It is true that a GenAI LLM can seemingly infer correct answers for absolutely new things it has never seen before (zero-shot inference), but the mechanism behind this comes simply from having seen many similar things before, not from "reasoning".
