
Add special effects to videos in one sentence; the most complete insect brain map to date


Directory:


  1. Composer: Creative and Controllable Image Synthesis with Composable Conditions
  2. Structure and Content-Guided Video Synthesis with Diffusion Models
  3. The connectome of an insect brain
  4. Uncertainty-driven dynamics for active learning of interatomic potentials
  5. Combinatorial synthesis for AI-driven materials discovery
  6. Masked Images Are Counterfactual Samples for Robust Fine-tuning
  7. One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale
  8. ArXiv Weekly Radiostation: selected papers in NLP, CV, ML, and more (with audio)

Paper 1: Composer: Creative and Controllable Image Synthesis with Composable Conditions

  • Author: Lianghua Huang et al.
  • Paper address: https://arxiv.org/pdf/2302.09778v2.pdf

Abstract: In the field of AI painting, many researchers are working on improving the controllability of AI painting models, that is, making the model generate images that better match human intent. Some time ago, a model called ControlNet pushed this controllability to a new peak. Around the same time, researchers from Alibaba and Ant Group produced results in the same field; this article is a detailed introduction to that work.
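Conceptually, Composer's controllability comes from decomposing an image into several condition representations (text, color palette, sketch, and so on) that can be freely recombined at inference time. The minimal PyTorch sketch below illustrates only that composition idea; the module names, dimensions, and projection choices are illustrative assumptions, not Composer's actual code.

```python
# Minimal sketch of composable conditioning (illustrative; not Composer's real code).
# Idea: embed each condition separately, then sum the embeddings so any subset
# can be dropped or recombined at inference time.
import torch
import torch.nn as nn

class ComposableConditioner(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # One projection per condition type (dimensions here are made up).
        self.text_proj = nn.Linear(512, dim)    # e.g. a CLIP text embedding
        self.color_proj = nn.Linear(156, dim)   # e.g. a color-palette histogram
        self.sketch_proj = nn.Conv2d(1, dim, kernel_size=8, stride=8)

    def forward(self, text=None, color=None, sketch=None):
        cond = 0.0
        if text is not None:
            cond = cond + self.text_proj(text)
        if color is not None:
            cond = cond + self.color_proj(color)
        if sketch is not None:
            # Pool the spatial sketch features into a single vector.
            cond = cond + self.sketch_proj(sketch).mean(dim=(2, 3))
        return cond  # fed to the diffusion denoiser as guidance

conditioner = ComposableConditioner()
text = torch.randn(1, 512)
sketch = torch.randn(1, 1, 64, 64)
# Any subset of conditions can be composed at inference time.
print(conditioner(text=text, sketch=sketch).shape)  # torch.Size([1, 256])
```

Because the conditions are combined additively, any subset can be dropped, swapped, or mixed across source images at inference time, which is what makes the synthesis "composable."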


Recommended: New ideas for AI painting: a domestic open-source model with 5 billion parameters achieves a leap forward in both controllability and quality.

Paper 2: Structure and Content-Guided Video Synthesis with Diffusion Models

  • Author: Patrick Esser et al.
  • Paper address: https://arxiv.org/pdf/2302.03011.pdf

Abstract: I believe many people have already experienced the charm of generative AI technology, especially after the AIGC boom of 2022. Text-to-image generation technology, represented by Stable Diffusion, became popular worldwide, and countless users poured in to express their artistic imagination with the help of AI...

Compared with image editing, video editing is a more challenging topic: it requires synthesizing new motion rather than merely modifying visual appearance, while also maintaining temporal consistency. Many companies are exploring this track. Some time ago, Google released Dreamix, which applies a text-conditioned video diffusion model (VDM) to video editing.

Recently, Runway, a company that participated in the creation of Stable Diffusion, launched a new AI model, Gen-1, which can convert an existing video into a new one in any style specified by a text prompt or reference image. For example, turning "people on the street" into "clay puppets" requires only one line of prompt.
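Per the paper's title, the recipe is to guide a video diffusion model with per-frame structure (for example, depth estimates) extracted from the source clip, while a text or image embedding supplies the new content. The sketch below is a heavily simplified, hypothetical rendering of that structure/content split; the real model is an iteratively sampled diffusion model, not a single forward pass, and all module names and shapes here are assumptions.

```python
# Hypothetical sketch of structure- and content-guided video editing
# (illustrative; module names and shapes are assumptions, not Gen-1's API).
import torch
import torch.nn as nn

class VideoEditor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Stand-in for a monocular depth / structure estimator.
        self.structure = nn.Conv2d(3, dim, 3, padding=1)
        # Stand-in for a denoiser conditioned on structure + content embedding.
        self.denoise = nn.Conv2d(dim + dim, 3, 3, padding=1)
        self.content_proj = nn.Linear(512, dim)  # text/image embedding -> content

    def forward(self, frames, content_emb):
        # frames: (T, 3, H, W); content_emb: (512,) from a text or image encoder
        s = self.structure(frames)               # per-frame structure features
        c = self.content_proj(content_emb)       # shared content code
        c = c[None, :, None, None].expand(frames.shape[0], -1, *frames.shape[2:])
        # Stand-in for iterative diffusion sampling: map structure + content
        # straight to edited frames. The structure comes from the source video
        # and the content code is shared across frames, which is what preserves
        # motion while changing appearance.
        return self.denoise(torch.cat([s, c], dim=1))

editor = VideoEditor()
video = torch.randn(8, 3, 64, 64)          # 8 frames of the source clip
prompt_emb = torch.randn(512)              # e.g. an embedding of "clay puppets"
print(editor(video, prompt_emb).shape)     # torch.Size([8, 3, 64, 64])
```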


Recommended: Just one sentence or one picture to add special effects; the company behind Stable Diffusion has new AIGC tricks.

Paper 3: The connectome of an insect brain

  • Author: Michael Winding et al.
  • Paper address: https://www.science.org/doi/10.1126/science.add9330

Abstract: Researchers have completed the most advanced atlas of the insect brain to date, a landmark achievement in neuroscience that brings scientists closer to a true understanding of the mechanisms of thought.

An international team led by Johns Hopkins University and the University of Cambridge has produced an astonishingly detailed map of every neural connection in the brain of a fruit fly larva, a prototypical model organism whose study is relevant to understanding the human brain. The research may support future brain studies and inspire new machine learning architectures.


Recommended: The most complete insect brain map to date, which may inspire new machine learning architectures.

Paper 4: Uncertainty-driven dynamics for active learning of interatomic potentials

  • Author: Maksim Kulichenko et al.
  • Paper address: https://www.nature.com/articles/s43588-023-00406-5

Abstract: Machine learning (ML) models, if trained on datasets from high-fidelity quantum simulations, can yield accurate and efficient interatomic potentials. Active learning (AL) is a powerful tool for iteratively generating diverse datasets. In this approach, the ML model provides an uncertainty estimate along with its prediction for each new atomic configuration; if the uncertainty estimate exceeds a certain threshold, the configuration is included in the dataset.

Recently, researchers from Los Alamos National Laboratory in the United States developed a strategy, uncertainty-driven dynamics for active learning (UDD-AL), to more quickly discover configurations that meaningfully augment the training dataset. UDD-AL modifies the potential energy surface used in molecular dynamics simulations to favor regions of configuration space where model uncertainty is large. The performance of UDD-AL is demonstrated on two AL tasks; the figure below compares the UDD-AL and MD-AL methods on the glycine test case.

[Figure: comparison of the UDD-AL and MD-AL methods on the glycine test case]
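In outline, the method keeps an ensemble of potentials whose disagreement serves as the uncertainty estimate, then biases the dynamics toward regions where that disagreement is large, so the simulation itself seeks out configurations worth labeling. The toy sketch below illustrates that loop; the bias functional, constants, and threshold are illustrative assumptions, not the paper's exact expressions.

```python
# Toy sketch of uncertainty-driven dynamics for active learning (UDD-AL style).
# The exact bias functional in the paper may differ; this is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def ensemble_energy(x, params):
    """Toy ensemble of 1D potentials; disagreement serves as uncertainty."""
    return np.array([p[0] * x**2 + p[1] * np.sin(3 * x) for p in params])

params = rng.normal([1.0, 0.3], 0.05, size=(5, 2))   # 5 ensemble members

def biased_force(x, k=5.0, eps=1e-4):
    """Force from (mean energy - k * uncertainty), via finite differences."""
    def e_biased(y):
        e = ensemble_energy(y, params)
        return e.mean() - k * e.std()        # bias rewards uncertain regions
    return -(e_biased(x + eps) - e_biased(x - eps)) / (2 * eps)

# Overdamped Langevin dynamics on the biased surface.
x, dt, temp = 0.0, 1e-3, 0.1
training_set = []
for step in range(2000):
    x += dt * biased_force(x) + np.sqrt(2 * temp * dt) * rng.normal()
    sigma = ensemble_energy(x, params).std()
    if sigma > 0.2:                          # threshold: label and add to data
        training_set.append(x)               # (a QM oracle would be called here)

print(f"collected {len(training_set)} high-uncertainty configurations")
```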

Recommended: Nature sub-journal | Uncertainty-driven dynamics for automatic sampling in active learning.

Paper 5: Combinatorial synthesis for AI-driven materials discovery

  • Author: John M. Gregoire et al.
  • Paper address: https://www.nature.com/articles/s44160-023-00251-4

Abstract: Synthesis is the cornerstone of solid-state materials experimentation, and any synthesis technique necessarily involves changing some synthesis parameters, the most common being composition and annealing temperature. Combinatorial synthesis generally refers to automated/parallelized materials synthesis to create collections of materials with systematic variations of one or more synthesis parameters. Artificial intelligence-controlled experimental workflows place new requirements on combinatorial synthesis.

Here, Caltech researchers provide an overview of combinatorial synthesis, envisioning a future of accelerated materials science driven by the co-development of combinatorial synthesis and AI technologies, and establish ten metrics for evaluating trade-offs among different techniques, covering speed, scalability, scope, and quality. These metrics help assess a technique's suitability for a given workflow and illustrate how advances in combinatorial synthesis will usher in a new era of accelerated materials science. The figure below shows the synthesis metrics and the evaluation of each combinatorial synthesis platform.

[Figure: synthesis metrics and evaluations of combinatorial synthesis platforms]

Recommended: Nature Synthesis review: AI-driven combinatorial synthesis for materials discovery.

Paper 6: Masked Images Are Counterfactual Samples for Robust Fine-tuning

  • Author: Yao Xiao et al.
  • Paper address: https://arxiv.org/abs/2303.03052

Abstract: The Human-Computer Intelligence Fusion Laboratory (HCP) at Sun Yat-sen University has made fruitful achievements in AIGC and multi-modal large models, with more than ten papers accepted at the recent AAAI 2023 and CVPR 2023, placing it in the first echelon of global research institutions. One of these works uses causal models to significantly improve the controllability and generalization of multi-modal large models during fine-tuning: "Masked Images Are Counterfactual Samples for Robust Fine-tuning".
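As the title suggests, the core move is to construct masked-and-refilled images as counterfactual samples and use them to regularize fine-tuning so the pre-trained model's robustness is preserved. The sketch below is a loose, hypothetical rendering of that idea with a generic encoder and random patch masking; the paper's causal formulation and masking strategy are more involved.

```python
# Hedged sketch: masked images as counterfactual samples during fine-tuning.
# Masking scheme and loss are illustrative; the paper's method is more involved.
import torch
import torch.nn as nn
import torch.nn.functional as F

def counterfactual_mask(images, patch=8, ratio=0.5):
    """Mask a random fraction of patches by refilling them from another image."""
    b, c, h, w = images.shape
    donors = images[torch.randperm(b)]            # refill source: shuffled batch
    mask = (torch.rand(b, 1, h // patch, w // patch) < ratio).float()
    mask = F.interpolate(mask, size=(h, w))       # patch mask -> pixel mask
    return images * (1 - mask) + donors * mask

encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
frozen = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
frozen.load_state_dict(encoder.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)                       # stand-in for the pre-trained model

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
images = torch.randn(4, 3, 32, 32)
cf = counterfactual_mask(images)
# Regularize: the fine-tuned model's features on counterfactual images should
# track the frozen pre-trained model, preserving its robustness.
loss = F.mse_loss(encoder(cf), frozen(cf))
loss.backward()
opt.step()
print(float(loss))
```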


Recommendation: A new breakthrough from Sun Yat-sen University's HCP lab: using the causal paradigm to upgrade multi-modal large models.

Paper 7: One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale

  • Author: Fan Bao et al.
  • Paper address: https://ml.cs.tsinghua.edu.cn/diffusion/unidiffuser.pdf

Abstract: This paper proposes UniDiffuser, a probabilistic modeling framework designed for multi-modality. Using U-ViT, the transformer-based network architecture proposed by the same team, the authors trained a one-billion-parameter model on the open-source large-scale image-text dataset LAION-5B, enabling a single underlying model to complete a variety of generation tasks with high quality (Figure 1). Simply put, in addition to one-way text-to-image generation, it also supports image-to-text generation, joint image-text generation, unconditional image and text generation, image-text rewriting, and more, greatly improving the production efficiency of text and image content and further expanding the application horizons of generative models.
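The unifying trick is to perturb each modality with its own independent timestep and train one transformer to predict the noise of all modalities jointly: setting a modality's timestep to 0 treats it as a clean condition (recovering conditional generation), while large timesteps on both sides recover unconditional and joint generation. Below is a minimal sketch of that scheme; the module layout and dimensions are invented for illustration, not U-ViT itself.

```python
# Minimal sketch of UniDiffuser-style joint noise prediction
# (illustrative; not the actual U-ViT architecture).
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    def __init__(self, img_dim=64, txt_dim=32, dim=128):
        super().__init__()
        self.img_in = nn.Linear(img_dim + 1, dim)   # +1 for the timestep
        self.txt_in = nn.Linear(txt_dim + 1, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.img_out = nn.Linear(dim, img_dim)
        self.txt_out = nn.Linear(dim, txt_dim)

    def forward(self, x_img, t_img, x_txt, t_txt):
        # Each modality carries its own timestep; t=0 means "clean" (a condition).
        img_tok = self.img_in(torch.cat([x_img, t_img], dim=-1))[:, None]
        txt_tok = self.txt_in(torch.cat([x_txt, t_txt], dim=-1))[:, None]
        h = self.backbone(torch.cat([img_tok, txt_tok], dim=1))
        return self.img_out(h[:, 0]), self.txt_out(h[:, 1])

model = JointDenoiser()
x_img, x_txt = torch.randn(2, 64), torch.randn(2, 32)
t_img, t_txt = torch.rand(2, 1), torch.zeros(2, 1)  # t_txt = 0: text-conditioned
eps_img, eps_txt = model(x_img, t_img, x_txt, t_txt)
print(eps_img.shape, eps_txt.shape)  # torch.Size([2, 64]) torch.Size([2, 32])
```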


Recommended: Tsinghua's Zhu Jun team open-sources the first Transformer-based large multi-modal diffusion model; text-to-image, image-to-text, and rewriting are all covered.

ArXiv Weekly Radiostation

Heart of Machine cooperates with ArXiv Weekly Radiostation, initiated by Chu Hang, Luo Ruotian, and Mei Hongyuan. Building on the 7 papers above, this issue selects more important papers from this week, including 10 papers in each of the NLP, CV, and ML fields, with audio introductions to their abstracts.

This week’s 10 selected NLP papers are:

1. GLEN: General-Purpose Event Detection for Thousands of Types. (from Martha Palmer, Jiawei Han)

2. An Overview on Language Models: Recent Developments and Outlook. (from C.-C. Jay Kuo)

3. Learning Cross-lingual Visual Speech Representations. (from Maja Pantic)

4. Translating Radiology Reports into Plain Language using ChatGPT and GPT-4 with Prompt Learning: Promising Results, Limitations, and Potential. (from Ge Wang)

5. A Picture is Worth a Thousand Words: Language Models Plan from Pixels. (from Honglak Lee)

6. Do Transformers Parse while Predicting the Masked Word? (from Sanjeev Arora)

7. The Learnability of In-Context Learning. (from Amnon Shashua)

8. Is In-hospital Meta-information Useful for Abstractive Discharge Summary Generation? (from Yuji Matsumoto)

9. ChatGPT Participates in a Computer Science Exam. (from Ulrike von Luxburg)

10. Team SheffieldVeraAI at SemEval-2023 Task 3: Mono and multilingual approaches for news genre, topic and persuasion technique classification. (from Kalina Bontcheva)

This week’s 10 selected CV papers are:

1. From Local Binary Patterns to Pixel Difference Networks for Efficient Visual Representation Learning. (from Matti Pietikäinen, Li Liu)

2. Category-Level Multi-Part Multi-Joint 3D Shape Assembly.  (from Wojciech Matusik, Leonidas Guibas)

3. PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision.  (from Leonidas Guibas)

4. Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception.  (from Xiangyu Zhang)

5. Grab What You Need: Rethinking Complex Table Structure Recognition with Flexible Components Deliberation.  (from Bing Liu)

6. Unified Visual Relationship Detection with Vision and Language Models.  (from Ming-Hsuan Yang)

7. Contrastive Semi-supervised Learning for Underwater Image Restoration via Reliable Bank.  (from Huan Liu)

8. InstMove: Instance Motion for Object-centric Video Segmentation.  (from Xiang Bai, Alan Yuille)

9. ViTO: Vision Transformer-Operator.  (from George Em Karniadakis)

10. A Simple Framework for Open-Vocabulary Segmentation and Detection.  (from Jianfeng Gao, Lei Zhang)

This week’s 10 selected ML papers are:

1. Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap.  (from Bernhard Schölkopf)

2. AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks.  (from Jure Leskovec)

3. Relational Multi-Task Learning: Modeling Relations between Data and Tasks.  (from Jure Leskovec)

4. Interpretable Outlier Summarization.  (from Samuel Madden)

5. Visual Prompt Based Personalized Federated Learning.  (from Dacheng Tao)

6. Interpretable Joint Event-Particle Reconstruction for Neutrino Physics at NOvA with Sparse CNNs and Transformers.  (from Pierre Baldi)

7. FedLP: Layer-wise Pruning Mechanism for Communication-Computation Efficient Federated Learning.  (from Fei Wang, Khaled B. Letaief)

8. Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle Detectors.  (from Sepp Hochreiter)

9. Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning.  (from Thomas Hofmann)

10. Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning.  (from Dimitris N. Metaxas)

