- New progress in Fei-Fei Li's 'spatial intelligence' agenda: Jiajun Wu's team's new 'BVS' suite evaluates computer vision models
- In her recent 2024 TED talk, Fei-Fei Li explained the concept of spatial intelligence in detail. Delighted and enthusiastic about the rapid progress of computer vision over the past few years, she is founding a startup around it. In the talk she mentioned a result from her Stanford team, BEHAVIOR, a behavior and motion dataset they created to train computers and robots how to act in a three-dimensional world. BEHAVIOR is a huge dataset of human behaviors and actions across a wide range of scenarios, built so that computers and robots can better understand and imitate human behavior. By analyzing the BEHAVIOR…
- AI 1058 2024-06-10 14:04:57
-
- Open source! V2Xverse: the first simulation platform and end-to-end model for V2X released
- Vehicle-to-everything-aided autonomous driving (V2X-AD), which synchronizes driving data through vehicle-road collaboration, has great potential for enabling safer driving strategies. Researchers have studied the communication side of V2X-AD extensively, but how much this infrastructure and these communication resources actually improve driving performance has not been fully explored. This highlights the need to study collaborative autonomous driving: how to design efficient information-sharing strategies for driving planning that improve the driving performance of each vehicle. This requires two key foundations: a platform that can provide a data environment for V2X-AD, and a driving…
- AI 361 2024-06-10 12:42:28
-
- GPT-4 passes the Turing test, judged human 54% of the time! New UCSD work: humans cannot recognize GPT-4
- Can GPT-4 pass the Turing test? When a powerful enough model appears, people often use the Turing test to measure the intelligence of the LLM. Recently, researchers from the Department of Cognitive Science at UCSD found that in a Turing test, people could not reliably distinguish GPT-4 from humans. Paper address: https://arxiv.org/pdf/2405.08007 In the test, GPT-4 was judged to be human 54% of the time. The experimental results are the first empirical evidence of a system passing an 'interactive' two-person Turing test. Researcher Cameron R. Jones recruited 500 volunteers, divided into five roles: four evaluators, namely…
- AI 1034 2024-06-10 12:32:27
-
- The open-source version of GLM-4 is finally here: surpassing Llama 3, multimodality comparable to GPT-4V, and a major upgrade to the MaaS platform
- The latest version of the large model costs 6 cents per million tokens. This morning at its AI Open Day, the closely watched large-model company Zhipu AI announced a series of industry-deployment figures: according to the latest statistics, the Zhipu AI large-model open platform now has 300,000 registered users, average daily call volume has reached 40 billion tokens, daily API consumption has grown more than 50-fold over the past 6 months, and usage of the most powerful GLM-4 model has grown more than 90-fold in the past 4 months. In the Qingtan app, more than 300,000 agents are active in the agent center, including many excellent productivity tools such as mind maps, document assistants, and schedulers. On the technology side, the latest version of GLM-4, GL…
- AI 1004 2024-06-10 11:44:17
-
- Say goodbye to the 3D Gaussian Splatting algorithm: SUNDAE, a spectrally pruned Gaussian field with neural compensation, is open source
- The AIxiv column is where this site publishes academic and technical content. Over the past few years, the AIxiv column has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission emails: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com. The authors of this paper include Yang Runyi, a master's student at Imperial College London; Zhu Zhenxin, a second-year master's student at Beihang University; Jiang Zhou, a second-year master's student at Beijing Institute of Technology; Ye Baijun, a fourth-year undergraduate at Beijing Institute of Technology; Zhang Yifei, a third-year undergraduate at the Chinese Academy of Sciences; China Telecom Artificial Intelligence…
- AI 983 2024-06-10 11:17:28
-
- Context-augmented AI coding assistants using RAG and SEM-RAG
- Improve developer productivity, efficiency, and accuracy by incorporating retrieval-augmented generation and semantic memory into AI coding assistants. Translated from "Enhancing AI Coding Assistants with Context Using RAG and SEM-RAG" by Janakiram MSV. While basic AI programming assistants are naturally helpful, they often fail to provide the most relevant and correct code suggestions because they rely on a general understanding of the software language and the most common patterns for writing software. The code these assistants generate is suited to solving the problems it is asked to solve, but it often does not conform to an individual team's coding standards, conventions, and styles. This frequently results in suggestions that must be modified or refined before the code can be accepted into the application…
- AI 1125 2024-06-10 11:08:19
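The retrieval idea described above can be sketched in a few lines. This is a minimal illustration, not the article's implementation: snippet scoring here is plain token overlap, whereas a real assistant (and SEM-RAG in particular) would use semantic embeddings; all names below are hypothetical.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for a coding
# assistant: retrieve the team's code snippets most similar to the query
# and prepend them to the prompt, so suggestions follow local conventions.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, snippets, k=2):
    """Return the k snippets sharing the most tokens with the query."""
    q = tokenize(query)
    return sorted(snippets, key=lambda s: len(q & tokenize(s)), reverse=True)[:k]

def build_prompt(query, snippets):
    context = "\n".join(retrieve(query, snippets))
    return f"Team code conventions:\n{context}\n\nTask: {query}"

snippets = [
    "def fetch_user(user_id): ...  # repository layer, returns Optional[User]",
    "LOG_FORMAT = '%(asctime)s %(name)s %(message)s'",
    "def fetch_order(order_id): ...  # repository layer, returns Optional[Order]",
]
prompt = build_prompt("write fetch_invoice in the repository layer", snippets)
# The prompt now carries the two repository-layer examples, not LOG_FORMAT.
```

Swapping the overlap score for embedding similarity is the usual next step; the prompt-assembly structure stays the same.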
-
- Stability AI open-sources a 47-second audio generation model that can produce insects, birds, rock music, and drum beats
- More good news in audio generation: Stability AI has just announced the open model Stable Audio Open, which can generate high-quality audio data. Project address: https://huggingface.co/stabilityai/stable-audio-open-1.0 Unlike Stability AI's commercial Stable Audio product, which generates longer, coherent music tracks of up to three minutes, Stable Audio Open generates high-quality audio of up to 47 seconds from a simple text prompt. The model was created for music production and sound design; it covers drum beats, instrument ri…
- AI 928 2024-06-10 09:37:36
-
- Step-by-step guide to using Groq Llama 3 70B locally
- Translator: Bugatti; Reviewer: Chonglou. This article describes how to use the Groq LPU inference engine to generate ultra-fast responses in Jan AI and VS Code. While everyone is working on building better large language models (LLMs), Groq focuses on the infrastructure side of AI: making these large models respond faster. This tutorial introduces the Groq LPU inference engine and how to access it locally on your laptop using the API and Jan AI. It also integrates it into VS Code to help us generate code, refactor code, write documentation, and generate unit tests, creating our own AI programming assistant for free. Introduction to the Groq LPU inference engine: Groq…
- AI 895 2024-06-10 09:16:58
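For readers who want a head start before the tutorial's Jan AI setup: Groq exposes an OpenAI-compatible HTTP API, so a chat-completion request is an ordinary JSON POST. The sketch below only assembles the request body (nothing is sent); the endpoint URL and model name follow Groq's public documentation at the time of writing and may change.

```python
import json

# Groq's OpenAI-compatible chat-completions endpoint (per public docs).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt, model="llama3-70b-8192", max_tokens=256):
    """Assemble the JSON body an HTTP client would POST to GROQ_URL,
    with the API key sent in an Authorization: Bearer header."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = json.dumps(build_request("Refactor this function to use pathlib."))
# Actually sending it (not executed here) would look like:
#   requests.post(GROQ_URL,
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 data=body)
```

Because the API mirrors OpenAI's, the official `openai` client pointed at Groq's base URL works as well; Jan AI and VS Code integrations build the same request under the hood.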
-
- Large-model app Tencent Yuanbao is live! Hunyuan upgraded into an all-around AI assistant you can carry anywhere
- On May 30, Tencent announced a comprehensive upgrade of its Hunyuan model. "Tencent Yuanbao", an app based on the Hunyuan model, officially launched and can be downloaded from Apple and Android app stores. Compared with the Hunyuan mini-program version from the earlier testing stage, Tencent Yuanbao provides core capabilities such as AI search, AI summarization, and AI writing for work-efficiency scenarios; for daily-life scenarios, Yuanbao also offers richer features, multiple AI applications, and new options such as creating personal agents. "Tencent does not strive to be first to release large models," said Liu Yuhong, vice president of Tencent Cloud and head of the Tencent Hunyuan large model. "Over the past year, we kept advancing the capabilities of the Tencent Hunyuan large model, polishing the technology in rich, massive business scenarios while gaining insight into users' real needs…"
- AI 714 2024-06-09 22:38:15
-
- Putting the entire Earth into a neural network: the Beihang University team launches a global remote sensing image generation model
- Did Beihang's research team use a diffusion model to "replicate" the Earth? For any location around the world, the model can generate remote sensing images at multiple resolutions, creating rich and diverse "parallel scenes". Moreover, complex geographical features such as terrain, climate, and vegetation are all taken into account. Inspired by Google Earth, Beihang's research team "loaded" satellite remote sensing imagery of the entire Earth, viewed from above, into a deep neural network. On top of such a network, the team built MetaEarth, a global top-down visual generation model. MetaEarth has 600 million parameters and can generate unbounded, multi-resolution remote sensing images covering any geographical location around the world. Compared with previous research, a global remote sensing image generation model…
- AI 237 2024-06-09 21:56:30
-
- Meta Chief AI Scientist LeCun: Don't pursue an LLM job
- Produced by 51CTO Technology Stack (WeChat ID: blog51cto). At VivaTech, the annual technology conference for startups in Paris, Meta's Chief AI Scientist Yann LeCun advised students who want to work in the AI ecosystem not to pursue LLM (large language model) work. "If you're interested in building the next generation of AI systems, you shouldn't work on LLMs. That's a big-company thing, and there's nothing you can contribute," LeCun said at the conference. He added that people should develop next-generation AI systems that overcome the limitations of large language models. 1. Stay away from LLMs. Interestingly, discussions about alternatives to LLMs have continued…
- AI 715 2024-06-09 20:29:50
-
- [Paper interpretation] System 2 Attention improves the objectivity and factuality of large language models
- 1. Brief introduction. This article briefly introduces the paper "System 2 Attention (is something you might need too)". Soft attention in transformer-based large language models (LLMs) can easily incorporate irrelevant information from the context into its latent representations, which adversely affects the generation of the next token. To help correct this, the paper introduces System 2 Attention (S2A), which leverages an LLM's ability to reason in natural language and follow instructions to decide what to attend to. S2A regenerates the input context so that it contains only the relevant parts, then…
- AI 584 2024-06-09 20:03:51
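The two-step S2A pipeline described above can be sketched as follows. This is an illustrative skeleton, not the paper's code: `llm` is a toy stand-in (it just drops lines marked as opinions), whereas the paper uses the language model itself for both the context-regeneration step and the final answer.

```python
# Sketch of the System 2 Attention (S2A) pipeline: an LLM first rewrites
# the input context to keep only what is relevant, then the answer is
# generated from the regenerated context alone.

def llm(prompt):
    # Toy stand-in for a real LLM call: pretend the model obeys the
    # instruction by dropping any context line marked as an opinion.
    context = prompt.split("CONTEXT:\n", 1)[1]
    kept = [ln for ln in context.splitlines() if not ln.startswith("Opinion:")]
    return "\n".join(kept)

def s2a(context, question):
    # Step 1: regenerate the context without irrelevant/biasing material.
    rewrite_prompt = (
        "Extract only the parts relevant to the question, "
        "removing opinions and irrelevant text.\nCONTEXT:\n" + context
    )
    clean_context = llm(rewrite_prompt)
    # Step 2: build the final prompt that answers from the clean context only.
    return clean_context, f"Answer the question from:\n{clean_context}\nQ: {question}"

context = "Fact: The paper introduces S2A.\nOpinion: I think the answer is B."
clean, final_prompt = s2a(context, "What does the paper introduce?")
# `clean` retains the fact and drops the biasing opinion line.
```

The key design point is that the second step never sees the original context, so irrelevant or leading material cannot contaminate the next-token distribution.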
-
- YOLOv10 is here! Truly real-time end-to-end object detection
- Over the past few years, the YOLO family has become the mainstream paradigm in real-time object detection thanks to its effective balance between computational cost and detection performance. Researchers have explored YOLO's structural design, optimization objectives, data augmentation strategies, and more in depth, making significant progress. However, reliance on non-maximum suppression (NMS) for post-processing hinders end-to-end deployment of YOLO models and adds inference latency. Furthermore, the design of various YOLO components lacks a comprehensive, thorough review, resulting in significant computational redundancy and limited model capability. The result is suboptimal efficiency alongside large potential for performance gains. In this work, we aim to further advance the performance-efficiency frontier of YOLO from two aspects: post-processing and model architecture…
- AI 831 2024-06-09 17:29:31
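For reference, this is the classic NMS post-processing step that YOLOv10 is designed to eliminate: a minimal IoU-based implementation, not YOLOv10's code (production detectors typically use vectorized versions such as `torchvision.ops.nms`).

```python
# Classic non-maximum suppression (NMS). Boxes are (x1, y1, x2, y2):
# keep the highest-scoring box, drop any remaining box whose IoU with it
# exceeds `iou_thresh`, and repeat until no boxes remain.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of the boxes kept, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
# The second box heavily overlaps the first (IoU 0.81) and is suppressed.
```

The sequential dependence on scores and pairwise IoUs is exactly what makes this step awkward to fuse into an end-to-end graph, which is the latency argument YOLOv10 addresses with NMS-free training.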
-
- Falcon returns after a year! 11 billion parameters, 5.5 trillion tokens, performance surpassing Llama 3
- Over the past few days, the world's attention seems to have been captured by OpenAI's release of GPT-4o. At the same time, OpenAI's challengers are making history of their own. On May 14, the Technology Innovation Institute (TII) under the Abu Dhabi Advanced Technology Research Council (ATRC) released the next-generation Falcon 2 models: Falcon 2 11B and Falcon 2 11B VLM opened access at 12 noon on May 14, and the new-generation "Falcon" returned to the arena. On launch it quickly topped the Hacker News front page. Last year, Falcon shocked everyone on its debut by surpassing Llama by a crushing margin. According to Hugging Face…
- AI 1031 2024-06-09 17:25:31
-
- OpenAI CEO responds to the 'hush agreement'; the dispute again centers on equity. Altman: it's my fault
- Since the resignations of Ilya Sutskever and superalignment head Jan Leike, OpenAI has remained in turmoil; more and more people have resigned, fueling further conflict. Yesterday, the controversy turned to a strict "hush agreement". Journalist Kelsey Piper revealed that every employee's onboarding paperwork includes instructions such as: "Within sixty days of leaving the company, you must sign a separation document containing a general release. If you do not complete it within 60 days, your equity benefits will be cancelled." The screenshot of this document prompted OpenAI's CEO to respond quickly: we have never clawed back anyone's vested equity. If people do not sign the separation agreement (or do not agree to the non-disparagement agreement)…
- AI 810 2024-06-09 17:07:32