Total found: 1378. Displayed: 188.

Publication date: 19-10-2005

Music analysis

Number: GB0000518401D0

Publication date: 23-09-2021

TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS

Number: US20210294567A1
Assignee: ALGORIDDIM GMBH

A device for processing audio signals, including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain a plurality of decomposed signals, a playback unit configured to start playback of a first output signal obtained from recombining at least a first decomposed signal at a first volume level with a second decomposed signal at a second volume level, such that the first output signal substantially equals the first input signal, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit has a volume control section adapted for reducing the first and second volume levels according to first and second transition functions.
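
As a rough illustration of the mechanism described above, the sketch below recombines decomposed stems of track A at independent volume levels and fades them out along separate per-stem transition functions while track B fades in. All names and the particular fade shapes are our assumptions, not ALGORIDDIM's implementation.

```python
import numpy as np

def transition(stems_a, track_b, sr=44100, seconds=5.0):
    """Fade out track A stem-by-stem while fading in track B.

    stems_a: decomposed signals whose sum reproduces input track A,
    so playback before the transition equals the original input.
    Each stem gets its own transition function, as in the claim.
    """
    n = int(sr * seconds)
    t = np.linspace(0.0, 1.0, n)
    # Hypothetical per-stem transition functions (steeper for later stems).
    fades = [np.cos(0.5 * np.pi * t) ** (i + 1) for i in range(len(stems_a))]
    out_a = sum(f * s[:n] for f, s in zip(fades, stems_a))
    out_b = np.sin(0.5 * np.pi * t) * track_b[:n]  # second output fades in
    return out_a + out_b

# Demo: two fake stems (e.g. vocals + accompaniment) crossfaded into track B.
n = 44100 * 5
stems = [np.random.randn(n), np.random.randn(n)]
print(transition(stems, np.random.randn(n)).shape)
```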

Publication date: 28-07-2022

MUSIC RECOMMENDATION FOR INFLUENCING PHYSIOLOGICAL STATE

Number: US20220233807A1

Methods, systems, and devices are provided that allow for identifying the effects of musical selections on various biomarkers in one or more users. Based on the analysis of long- and short-term effects of musical attributes on biomarkers, music selections are recommended to achieve desired physiological effects and results.

Publication date: 11-05-2022

RELATIONS BETWEEN MUSIC ITEMS

Number: EP3996085A1

A method of determining relations between music items, the method comprising determining a first input representation for a symbolic representation of a first music item, mapping the first input representation onto one or more subspaces derived from a vector space using a first model, wherein each subspace models a characteristic of the music items, determining a second input representation for music data representing a second music item, mapping the second input representation onto the one or more subspaces using a second model, and determining a distance between the mappings of the first and second input representations in each subspace, wherein the distance represents the degree of relation between the first and second input representations with respect to the characteristic modelled by the subspace.
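
A minimal sketch of the comparison step this abstract describes, under our own assumptions: two random linear projection heads stand in for the trained "first model" and "second model", and cosine distance is measured per subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
D, SUBSPACES, K = 128, 4, 16         # input dim, #subspaces, subspace dim

# Stand-ins for the trained first/second models: one projection per subspace.
proj_score = rng.standard_normal((SUBSPACES, K, D)) / np.sqrt(D)
proj_audio = rng.standard_normal((SUBSPACES, K, D)) / np.sqrt(D)

def subspace_distances(score_repr, audio_repr):
    """Cosine distance between the two mappings in each subspace."""
    dists = []
    for s in range(SUBSPACES):
        a = proj_score[s] @ score_repr   # first model's mapping
        b = proj_audio[s] @ audio_repr   # second model's mapping
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        dists.append(1.0 - cos)          # small distance = strong relation
    return dists

print(subspace_distances(rng.standard_normal(D), rng.standard_normal(D)))
```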

Publication date: 19-08-2020

Automated music production

Number: GB0002581319A

A computer implemented method, preferably using artificial intelligence, of rendering music into audio format comprising receiving user-defined music production parameters such as tempo, duration and musical intensity and using them to produce a custom music piece in digital musical notation format (e.g. MIDI). The custom music piece is rendered into audio format for output to the user. Prior to the rendering step being completed, a preview audio render is created using pre-generated music segments stored in audio format. The segments have been generated by producing multiple sections of music according to different predetermined music production parameters. The segments are stored with associated metadata indicating the production parameters used to produce them and the preview audio render is created by matching sections of the custom music piece to different ones of the pre-generated music segments, based on the user-defined production parameters and the metadata, and sequencing the ...
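
To make the preview-matching step concrete, here is a rough sketch with a data model of our own invention (the patent does not specify these fields): for each section of the custom piece, pick the pre-generated segment whose stored production parameters are closest to the user-defined ones.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    audio_path: str      # pre-rendered audio stored on disk
    tempo: float
    intensity: float     # metadata recorded when the segment was generated

def pick_preview(sections, library):
    """For each custom-piece section, choose the closest stored segment."""
    def cost(section, seg):
        return (abs(section["tempo"] - seg.tempo)
                + abs(section["intensity"] - seg.intensity))
    return [min(library, key=lambda seg: cost(sec, seg)) for sec in sections]

library = [Segment("seg_a.wav", 120, 0.3), Segment("seg_b.wav", 140, 0.8)]
sections = [{"tempo": 124, "intensity": 0.4}, {"tempo": 138, "intensity": 0.9}]
print([seg.audio_path for seg in pick_preview(sections, library)])
```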

Publication date: 28-01-1998

Determining the pitch of string instruments

Number: GB0009725301D0

Publication date: 15-03-2018

GENERATING AUDIO USING NEURAL NETWORKS

Number: CA0003036067A1
Assignee: GOWLING WLG (CANADA) LLP

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
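
The abstract describes an autoregressive, WaveNet-style loop: at each time step a convolutional subnetwork consumes the samples generated so far, and an output layer turns its "alternative representation" into a score distribution over possible next samples. Below is a toy sketch of that control flow in PyTorch; the tiny stand-in network, quantization, and sampling details are our assumptions and nothing like the real model's scale.

```python
import torch
import torch.nn as nn

QUANT = 256                          # e.g. 8-bit mu-law sample alphabet

conv_subnet = nn.Sequential(         # stand-in convolutional subnetwork
    nn.Conv1d(1, 32, kernel_size=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=2, dilation=2), nn.ReLU(),
)
output_layer = nn.Linear(32, QUANT)  # produces the score distribution

@torch.no_grad()
def generate(n_steps=100, context=16):
    seq = torch.zeros(1, 1, context)              # running audio history
    for _ in range(n_steps):
        h = conv_subnet(seq[:, :, -context:])     # alternative representation
        scores = output_layer(h[:, :, -1])        # scores for the next sample
        idx = torch.multinomial(scores.softmax(-1), 1).float()
        sample = idx / (QUANT - 1) * 2 - 1        # back to [-1, 1] audio range
        seq = torch.cat([seq, sample.view(1, 1, 1)], dim=2)
    return seq.squeeze()

print(generate().shape)
```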

Publication date: 17-12-2015

METHOD FOR FOLLOWING A MUSICAL SCORE AND ASSOCIATED MODELLING METHOD

Number: CA0002950816A1

The present invention relates to a method for following a musical score (10), comprising the following steps performed in real time: recording (23) at least one sound (12) produced by a performer; estimating (24) at least one chroma vector (Vx); comparing (26) said chroma vector (Vx) with theoretical chroma vectors of said musical score (10); comparing (27) a transition (Tx) between said chroma vector (Vx) and a preceding chroma vector (Vx-1) with theoretical transitions of said musical score (10); and estimating (28) a working position (Px) of the performer as a function of a preceding working position (Px-1), of the comparison (26) of said chroma vector (Vx), and of the comparison (27) of said transition (Tx), the recording step (23) being carried out over a duration (Di) adapted according to the ratio between a duration (Dx) of said transition (Tx) and a reference duration (Dref).
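
A compact sketch of the two real-time comparison steps (observed chroma vector against the score's theoretical chroma vectors, and the observed transition against theoretical transitions). The additive scoring and all names are our own simplification, not the patented procedure.

```python
import numpy as np

def follow(chroma, prev_chroma, prev_pos, score_chromas, score_transitions):
    """Estimate the performer's position in the score.

    chroma, prev_chroma: 12-bin chroma vectors from consecutive frames.
    score_chromas[i]: theoretical chroma vector at score position i.
    score_transitions[i]: theoretical transition vector into position i.
    """
    transition = chroma - prev_chroma
    scores = []
    for i in range(len(score_chromas)):
        c = float(chroma @ score_chromas[i])          # chroma comparison
        t = float(transition @ score_transitions[i])  # transition comparison
        prox = -abs(i - (prev_pos + 1))               # prefer nearby positions
        scores.append(c + t + 0.5 * prox)
    return int(np.argmax(scores))
```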

Publication date: 30-01-2012

METHOD FOR ANALYZING A DIGITAL MUSIC AUDIO SIGNAL

Number: EA201170559A1

The present invention relates to a music audio representation method for analyzing a music audio signal (2) in order to extract a set of Chord Family Profiles (CFP) contained in the music audio signal (2), the method comprising the following steps: a) applying a first algorithm (4) to the music audio signal (2) in order to extract first data (5) representative of the tonality of the music audio signal (2), and b) applying a second algorithm (6) to said first data (5) in order to obtain second data (7) representative of the tonal centre contained in the first data (5).

Publication date: 03-01-2002

USING A SYSTEM FOR PREDICTION OF MUSICAL PREFERENCES FOR THE DISTRIBUTION OF MUSICAL CONTENT OVER CELLULAR NETWORKS

Number: WO0000201439A2
Author: GANG, Dan, LEHMANN, Daniel

Publication date: 21-11-2002

CONTENT IDENTIFIERS TRIGGERING CORRESPONDING RESPONSES

Number: WO0002093823A1

Fingerprint data derived from audio or other content is used as an identifier, to trigger machine responses corresponding to the content. The fingerprint can be derived from the content, and also separately encoded in a file header. Digital watermarks can also be similarly used.
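
The mechanism is essentially a lookup: derive a fingerprint from the content, match it against a registry, and trigger the machine response registered for it. A hedged sketch follows; the cryptographic hash is a stand-in only, since a real deployment would use a robust perceptual fingerprint (or the separately encoded header copy the abstract mentions).

```python
import hashlib

responses = {}                       # fingerprint -> machine response

def fingerprint(audio_bytes: bytes) -> str:
    # Stand-in only: a cryptographic hash is NOT robust to re-encoding,
    # unlike the perceptual fingerprints the patent contemplates.
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

def register(audio_bytes: bytes, action):
    responses[fingerprint(audio_bytes)] = action

def on_content(audio_bytes: bytes):
    action = responses.get(fingerprint(audio_bytes))
    if action:
        action()                     # trigger the corresponding response

register(b"demo-clip", lambda: print("open artist page"))
on_content(b"demo-clip")
```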

Publication date: 28-05-2019

Generating audio using neural networks

Number: US0010304477B2

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

Publication date: 12-01-2017

Method for Distinguishing Components of an Acoustic Signal

Number: US20170011741A1

A method distinguishes components of a signal by processing the signal to estimate a set of analysis features, wherein each analysis feature defines an element of the signal and has feature values that represent parts of the signal, processing the signal to estimate input features of the signal, and processing the input features using a deep neural network to assign an associative descriptor to each element of the signal, wherein a degree of similarity between the associative descriptors of different elements is related to a degree to which the parts of the signal represented by the elements belong to a single component of the signal. The similarities between associative descriptors are processed to estimate correspondences between the elements of the signal and the components in the signal. Then, the signal is processed using the correspondences to distinguish component parts of the signal.
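
This is the deep-clustering pattern: each signal element gets an embedding (the "associative descriptor"), and elements with similar embeddings are grouped into one component. A small sketch of the grouping stage, assuming the descriptors already exist; k-means from scikit-learn is our choice of clusterer, which the abstract does not prescribe.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_elements(descriptors, n_components=2):
    """descriptors: (n_elements, d) associative descriptors, one per
    signal element. Returns a component label for each element."""
    # Normalize so similarity reduces to angular closeness.
    d = descriptors / (np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-9)
    return KMeans(n_clusters=n_components, n_init=10).fit_predict(d)

# Two synthetic "sources" whose descriptors point in different directions.
rng = np.random.default_rng(1)
a = rng.normal([1, 0, 0], 0.1, (50, 3))
b = rng.normal([0, 1, 0], 0.1, (50, 3))
labels = group_elements(np.vstack([a, b]))
print(labels[:5], labels[-5:])   # the two blocks land in different clusters
```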

Publication date: 30-04-2020

METHOD AND APPARATUS FOR CORRECTING DELAY BETWEEN ACCOMPANIMENT AUDIO AND UNACCOMPANIED AUDIO, AND STORAGE MEDIUM

Number: US20200135156A1

A method and apparatus for correcting a delay between accompaniment audio and unaccompanied audio, and a storage medium are provided. The method includes: acquiring original audio of a target song, and extracting original vocal audio from the original audio; determining a first delay between the original vocal audio and the unaccompanied audio, and determining a second delay between the accompaniment audio and the original audio; and correcting a delay between the accompaniment audio and the unaccompanied audio based on the first delay and the second delay. Thus, the correction efficiency of the delay between accompaniment audio and unaccompanied audio is improved, and correction mistakes possibly caused by human factors are eliminated, thereby improving the accuracy.
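
Each of the two delays can be estimated by peak-picking a cross-correlation, and the correction then combines them. A sketch under our own assumptions (NumPy cross-correlation; chaining the two delays through the original track's timeline is our reading of the method):

```python
import numpy as np

def estimate_delay(x, y):
    """Samples by which x lags y (positive: events in x happen later)."""
    corr = np.correlate(x, y, mode="full")
    return int(np.argmax(corr)) - (len(y) - 1)

def accompaniment_vs_unaccompanied_delay(original, original_vocal,
                                         accompaniment, unaccompanied):
    first = estimate_delay(original_vocal, unaccompanied)
    second = estimate_delay(accompaniment, original)
    # The extracted vocal shares the original's timeline, so the delays chain.
    return first + second

x = np.zeros(100); x[12] = 1.0
y = np.zeros(100); y[10] = 1.0
print(estimate_delay(x, y))   # -> 2: x's event happens 2 samples later
```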

Publication date: 09-11-2023

CONVERTING AUDIO SAMPLES TO FULL SONG ARRANGEMENTS

Number: US20230360620A1

In examples, a method for converting audio samples to full song arrangements is provided. The method includes receiving audio sample data, determining a melodic transcription, based on the audio sample data, and determining a sequence of music chords, based on the melodic transcription. The method further includes generating a full song arrangement, based on the sequence of music chords, and the audio sample data.

Publication date: 06-06-2007

Song search system and song search method

Number: EP0001587003A3
Author: Urata, Shigefumi

The present invention is a song map, a self-organizing map comprising a plurality of neurons that include characteristic vectors made up of data corresponding to a plurality of evaluation items that indicate the characteristics of the song data. Index-evaluation items, preset from among the evaluation items, have a trend from one end of the map to the other; song data is mapped to some of the neurons of the song map, and the status of the song map is displayed by points that correspond to the respective neurons. In the song map, values that decrease from one end to the other are learned as the initial values for the index-evaluation items.

Publication date: 13-07-2022

AUDIO STEM IDENTIFICATION SYSTEMS AND METHODS

Number: EP3796306B1
Assignee: Spotify AB

Publication date: 10-01-1997

PLAYING POSITION DETECTING METHOD AND PITCH DETECTING METHOD

Number: JP0009006339A
Author: ANDREAS SALLAI

PURPOSE: To detect the playing position and also to detect the pitch accurately and quickly. CONSTITUTION: A first pitch detection part 13 detects the pitch quickly by means of a neural net 15 and also detects the playing position. A second pitch detection part 12 detects the accurate pitch from a zero-cross point. A comparison part 17 outputs whichever pitch is detected first and supplies it to a quantizer 18. Playing position data and pitch data quantized by the quantizer 18 are supplied to a MIDI output part 19 and converted into MIDI data, which are supplied to a sound source (T.G.) 20. COPYRIGHT: (C)1997,JPO ...
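
The "accurate" second detector works from zero crossings: the spacing between successive rising zero crossings approximates one period. A minimal sketch of that generic technique (our implementation, not the patent's circuit):

```python
import numpy as np

def zero_cross_pitch(signal, sr=44100):
    """Estimate pitch from the mean spacing of rising zero crossings."""
    rising = np.flatnonzero((signal[:-1] < 0) & (signal[1:] >= 0))
    if len(rising) < 2:
        return None
    period = np.mean(np.diff(rising)) / sr    # seconds per cycle
    return 1.0 / period

t = np.arange(44100) / 44100
print(zero_cross_pitch(np.sin(2 * np.pi * 196.0 * t)))   # ~196 Hz (G3)
```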

Publication date: 14-03-2007

Analysis and transcription of music

Number: GB0002430073A

A method of transcribing music 121 produces a transcript 113 which comprises a sequence of symbols that represents the music 121. Data representing sound events (201a-e) is received and a model 112 is accessed to associate the sound events (201a-e) with appropriate transcription symbols. The model 112 comprises transcription symbols and also decision criteria that are used to determine which transcription symbol is appropriate for a particular sound event (201a-e). The model 112 may associate each of the sound events (201a-e) in the music 121 with: a leaf node (504a-h) of a classification tree (500), patterns of activated nodes in a neural net (900), or cluster centres of a cluster model.

Publication date: 30-01-2019

Automated music production

Number: GB0201820266D0

Publication date: 03-07-1995

Signal-analysis device with at least one tensioned string and a receiver

Number: AU0001067495A

Publication date: 01-02-2019

Audio signal scoring method, device, electronic equipment and computer storage medium

Number: CN0109300485A
Author: DENG FENG, LI YAN, JIANG TAO

Publication date: 26-12-2019

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

Number: US20190392807A1
Assignee: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes: a memory that stores a machine-learning trained acoustic model mimicking the voice of a singer, and at least one processor. When a vocoder mode is on, prescribed lyric data and pitch data corresponding to a user operation of an operation element of the musical instrument are inputted to the trained acoustic model, and inferred singing voice data that infers a singing voice of the singer is synthesized on the basis of acoustic feature data output by the trained acoustic model and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element. When the vocoder mode is off, the inferred singing voice data is synthesized based on the acoustic feature data without using the sound waveform data.

1. An electronic musical instrument comprising:
a plurality of operation elements respectively corresponding to mutually different pitch data;
a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data including training lyric data and training pitch data, and on training singing voice data of a singer corresponding to the training musical score data, the trained acoustic model being configured to receive lyric data and prescribed pitch data and output acoustic feature data of a singing voice of the singer in response to the received lyric data and pitch data; and
at least one processor in which a first mode and a second mode are interchangeably selectable, which, in accordance with a user operation on an operation element in the plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred ...

Publication date: 02-02-2017

Utilizing Athletic Activities to Augment Audible Compositions

Number: US20170031650A1
Assignee: Nike Inc

Example embodiments relate to methods and systems for playback of adaptive music corresponding to an athletic activity. A user input is received from a user selecting an existing song for audible playback to the user, the song comprising a plurality of audio layers including at least a first layer, a second layer, and a third layer. Augmented playback of the existing song to the user is initiated by audibly providing the first layer but not the second layer. Physical activity information derived from a sensor corresponding to a real-time physical activity level of a user is received. If the physical activity level of the user is above a first activity level threshold, the augmented playback of the existing song is continued by audibly providing the first layer and the second layer to the user.

Publication date: 27-04-2021

Singing voice separation with deep U-Net convolutional networks

Number: US0010991385B2
Assignee: Spotify AB

A system, method and computer product for estimating a component of a provided audio signal. The method comprises converting the provided audio signal to an image, processing the image with a neural network trained to estimate one of vocal content and instrumental content, and storing a spectral mask output from the neural network as a result of the image being processed by the neural network. The neural network is a U-Net. The method also comprises providing the spectral mask to a client media playback device, which applies the spectral mask to a spectrogram of the provided audio signal, to provide a masked spectrogram. The media playback device also transforms the masked spectrogram to an audio signal, and plays back that audio signal via an output user interface.
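
The client-side step is simple: multiply the mixture's magnitude spectrogram by the U-Net's soft mask, keep the mixture phase, and invert. A sketch using librosa as the STFT library, which is our choice; the patent only requires a spectrogram and an inverse transform.

```python
import numpy as np
import librosa

def apply_spectral_mask(audio, mask, n_fft=2048, hop=512):
    """Isolate the estimated component by masking the mixture spectrogram.

    mask: soft values in [0, 1], shaped like the STFT magnitude,
    as produced by the trained U-Net for vocals or instruments.
    """
    stft = librosa.stft(audio, n_fft=n_fft, hop_length=hop)
    masked = mask * np.abs(stft) * np.exp(1j * np.angle(stft))  # keep phase
    return librosa.istft(masked, hop_length=hop, length=len(audio))
```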

Publication date: 27-05-2021

METHODS AND APPARATUS FOR AUDIO EQUALIZATION BASED ON VARIANT SELECTION

Number: US20210158148A1

Methods, apparatus, systems, and articles of manufacture for audio equalization based on variant selection are disclosed. An example apparatus includes a processor to obtain training data, the training data including a plurality of reference audio signals each associated with a variant of music, and to organize the training data into a plurality of entries based on the plurality of reference audio signals; a training model executor to execute a neural network model using the training data; and a model trainer to train the neural network model by updating at least one weight corresponding to one of the entries in the training data when the neural network model does not satisfy a training threshold.

Publication date: 11-10-2022

Electronic musical instrument, electronic musical instrument control method, and storage medium

Number: US0011468870B2
Assignee: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes: a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained and learned singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.

Publication date: 02-04-2024

Processing sequences using convolutional neural networks

Number: US0011948066B2
Assignee: DeepMind Technologies Limited

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing sequences using convolutional neural networks. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

Publication date: 15-03-2007

MUSIC ANALYSIS

Number: WO000002007029002A2

There is disclosed an analyser (101) for building a transcription model (112; 500) using a training database (111) of music. The analyser (101) decomposes the training music (111) into sound events (201a-e) and, in one embodiment, allocates the sound events to leaf nodes (504a-h) of a tree (500). There is also disclosed a transcriber (102) for transcribing music (121) into a transcript (113). The transcript (113) is a sequence of symbols that represents the music (121), where each symbol is associated with a sound event in the music (121) being transcribed. In one embodiment, the transcriber (102) associates each of the sound events (201a-e) in the music (121) with a leaf node (504a-h) of a tree (500); in this embodiment the transcript (113) is a list of the leaf nodes (504a-h). The transcript (113) preserves information regarding the sequence of the sound events (201a-e) in the music (121) being transcribed.

Publication date: 04-09-2018

Generating music with deep neural networks

Number: US0010068557B1
Assignee: Google LLC

The present disclosure provides systems and methods that include or otherwise leverage a machine-learned neural synthesizer model. Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, the neural synthesizer model can use deep neural networks to generate sounds at the level of individual samples. Learning directly from data, the neural synthesizer model can provide intuitive control over timbre and dynamics and enable exploration of new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer. As one example, the neural synthesizer model can be a neural synthesis autoencoder that includes an encoder model that learns embeddings descriptive of musical characteristics and an autoregressive decoder model that is conditioned on the embedding to autoregressively generate musical waveforms that have the musical characteristics one audio sample at a time.

Publication date: 10-03-2020

Chord estimation method and chord estimation apparatus

Number: US0010586519B2
Assignee: Yamaha Corporation

A chord estimation apparatus estimates a first chord from an audio signal, and estimates a second chord by inputting the estimated first chord to a trained model that has learned a chord modification tendency.

Publication date: 17-08-2023

RELATIONS BETWEEN MUSIC ITEMS

Number: US20230260488A1
Assignee: Spotify AB

A method of determining relations between music items, wherein a music item is a submix of a musical composition comprising one or more music tracks, the method comprising determining a first input representation for at least part of a first music item, mapping the first input representation onto one or more subspaces derived from a vector space using a first model, wherein each subspace models a characteristic of the music items, determining a second input representation for at least part of a second music item, mapping the second input representation onto the one or more subspaces using a second model, and determining a distance between the mappings of the first and second input representations in each subspace, wherein the distance represents the degree of relation between the first and second input representations with respect to the characteristic modelled by the subspace.

Publication date: 07-05-2024

System and method for evaluating semantic closeness of data files

Number: US0011977845B2
Assignee: EMOTIONAL PERCEPTION AI LIMITED

The invention provides for the evaluation of semantic closeness of a source data file relative to candidate data files. The system includes an artificial neural network and processing intelligence that derives a property vector from extractable measurable properties of a data file. The property vector is mapped to related semantic properties for that same data file such that, during ANN training, pairwise similarity/dissimilarity in property is mapped towards corresponding pairwise semantic similarity/dissimilarity in semantic space to preserve semantic relationships. Based on comparisons between generated property vectors in continuous multi-dimensional property space, the system and method assess, rank, and then recommend and/or filter semantically close or semantically disparate candidate files from a query from a user that includes the data file. Applications of the categorization and recommendation system apply to search tools, including identification of illicit materials ...
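
A hedged sketch of that training signal: nudge pairwise distances between property vectors towards the corresponding pairwise distances in semantic space. The squared-difference form and PyTorch are our assumptions; the patent does not publish its exact loss.

```python
import torch

def distance_preserving_loss(prop_vecs, sem_vecs):
    """Encourage pairwise distances in property space to track
    pairwise distances in semantic space for the same file pairs."""
    dp = torch.cdist(prop_vecs, prop_vecs)   # property-space distances
    ds = torch.cdist(sem_vecs, sem_vecs)     # semantic-space distances
    return ((dp - ds) ** 2).mean()

props = torch.randn(8, 64, requires_grad=True)   # from the ANN
sems = torch.randn(8, 64)                        # fixed semantic targets
loss = distance_preserving_loss(props, sems)
loss.backward()
```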

Publication date: 22-06-1995

Signal analysis device

Number: DE0004343411A1

Publication date: 06-09-2000

Determining the pitch of string instruments

Number: GB0002319884B
Assignee: BLUE CHIP MUSIC GMBH

Publication date: 11-11-2020

Efficient circuit for sampling

Number: GB0202015208D0

Publication date: 22-08-2000

SIGNAL-ANALYSIS DEVICE WITH AT LEAST ONE TENSIONED STRING AND A RECEIVER

Number: CA0002174223C
Assignee: Blue Chip Music GmbH, Yamaha Corporation

Described is a signal-analysis device (1) with at least one tensioned string (E1, H2, G3, D4, A5, E6) whose oscillating length can be varied by pressing the string against at least one tie-bar, the device also having a receiver (2) and an evaluation unit (3 to 9) connected to the receiver. The aim of the invention is to provide a guitar synthesizer which provides the desired note data relatively rapidly after stimulation of the string. This is achieved by virtue of the fact that the evaluation unit (3 to 9) detects pulses or pulse groups which, following stimulation of the string (E1, H2, G3, D4, A5, E6), pass along the string past the receiver (2), the evaluation unit generating from the sequence of pulses or pulse groups a signal which represents a note.

Publication date: 26-03-2020

Apparatus for automatically generating music based on neural network and method thereof

Number: KR0102093233B1

Publication date: 18-03-2021

CREATIVE GAN GENERATING MUSIC DEVIATING FROM STYLE NORMS

Number: US20210082169A1

A method and system for generating music uses artificial intelligence to analyze existing musical compositions and then creates a musical composition that deviates from the learned styles. Known musical compositions created by humans are presented in digitized form along with a style designator to a computer for analysis, including recognition of musical elements and association of particular styles. A music generator generates a draft musical composition for similar analysis by the computer. The computer ranks such draft musical composition for correlation with known musical elements and known styles. The music generator modifies the draft musical composition using an iterative process until the resulting musical composition is recognizable as music but is distinctive in style.

Publication date: 20-04-2023

SUPERVISED METRIC LEARNING FOR MUSIC STRUCTURE FEATURES

Number: US20230121764A1

Devices, systems, and methods related to implementing supervised metric learning during a training of a deep neural network model are disclosed herein. In examples, audio input may be received, where the audio input includes a plurality of song fragments from a plurality of songs. For each song fragment, an aligning function may be performed to center the song fragment based on determined beat information, thereby creating a plurality of aligned song fragments. For each song fragment of the plurality of song fragments, an embedding vector may be obtained from the deep neural network. Thus, a batch of aligned song fragments from the plurality of aligned song fragments may be selected, such that a training tuple may be selected. A loss metric may be generated based on the selected training tuple and one or more weights of the deep neural network model may be updated based on the loss metric.
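
A small sketch of the tuple-and-update step, assuming beat-aligned fragments have already been embedded; PyTorch's built-in triplet margin loss stands in for the unspecified loss metric, and the anchor/positive pairing rule is our assumption.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.2)

def training_step(model, batch, optimizer):
    """batch: (anchor, positive, negative) beat-aligned fragment tensors,
    where anchor and positive come from the same structural section."""
    anchor, pos, neg = (model(x) for x in batch)   # embedding vectors
    loss = triplet(anchor, pos, neg)               # loss metric on the tuple
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # update model weights
    return loss.item()
```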

Publication date: 19-10-2005

Song search system and song search method

Number: EP0001587003A2
Author: Urata, Shigefumi

The present invention is a song map, a self-organizing map comprising a plurality of neurons that include characteristic vectors made up of data corresponding to a plurality of evaluation items that indicate the characteristics of the song data. Index-evaluation items, preset from among the evaluation items, have a trend from one end of the map to the other; song data is mapped to some of the neurons of the song map, and the status of the song map is displayed by points that correspond to the respective neurons. In the song map, values that decrease from one end to the other are learned as the initial values for the index-evaluation items.

Publication date: 06-04-2017

MACHINES, SYSTEMS AND PROCESSES FOR AUTOMATED MUSIC COMPOSITION AND GENERATION EMPLOYING LINGUISTIC AND/OR GRAPHICAL ICON BASED MUSICAL EXPERIENCE DESCRIPTORS

Number: CA0002999777A1

Automated music composition and generation machines, engines, systems, methods, and architectures that allow anyone, including music-composing robotic systems, without any knowledge of music theory or practice, or expertise in music or other creative endeavors, to instantly create unique and professional-quality music synchronized to any kind of media content, including, but not limited to, video, photography, slideshows, and any pre-existing audio format, as well as any object, entity, and/or event, wherein the system user only requires knowledge of one's own emotions and/or artistic concepts which are to be expressed musically in a piece of music that will ultimately be composed by the automated composition and generation system of the present invention.

Publication date: 15-06-2023

Utilizing Athletic Activities to Augment Audible Compositions

Number: US20230187050A1

Example embodiments relate to methods and systems for playback of adaptive music corresponding to an athletic activity. A user input is received from a user selecting an existing song for audible playback to the user, the song comprising a plurality of audio layers including at least a first layer, a second layer, and a third layer. Augmented playback of the existing song to the user is initiated by audibly providing the first layer but not the second layer. Physical activity information derived from a sensor corresponding to a real-time physical activity level of a user is received. If the physical activity level of the user is above a first activity level threshold, the augmented playback of the existing song is continued by audibly providing the first layer and the second layer to the user.

Publication date: 18-12-1996

Synthesizer detecting pitch and plucking point of stringed instrument to generate tones

Number: EP0000749107A3

Publication date: 05-04-2023

AUTONOMOUS GENERATION OF MELODY

Number: EP3803846B1
Assignee: Microsoft Technology Licensing, LLC

Publication date: 07-06-2023

LATENT-SPACE REPRESENTATIONS OF AUDIO SIGNALS FOR CONTENT-BASED RETRIEVAL

Number: EP4189670A1

Publication date: 19-12-2019

Generating audio using neural networks

Number: AU2017324937B2

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

Publication date: 15-03-2007

MUSIC ANALYSIS

Number: CA0002622012A1
Assignee: Individual

There is disclosed an analyser (101) for building a transcription model (112; 500) using a training database (111) of music. The analyser (101) decomposes the training music (111) into sound events (201a-e) and, in one embodiment, allocates the sound events to leaf nodes (504a-h) of a tree (500). There is also disclosed a transcriber (102) for transcribing music (121) into a transcript (113). The transcript (113) is a sequence of symbols that represents the music (121), where each symbol is associated with a sound event in the music (121) being transcribed. In one embodiment, the transcriber (102) associates each of the sound events (201a-e) in the music (121) with a leaf node (504a-h) of a tree (500); in this embodiment the transcript (113) is a list of the leaf nodes (504a-h). The transcript (113) preserves information regarding the sequence of the sound events (201a-e) in the music (121) being transcribed.

Publication date: 26-06-2018

SYSTEM FOR COMPOSING MUSIC BY USING ARTIFICIAL INTELLIGENCE AND METHOD THEREOF

Number: KR1020180070340A

When a composition request is received from a user, the harmony progression and melody progression of a piece of music are generated through an artificial neural network. Arrangement is performed on the result of the harmony and melody progression, the arranged result is rendered to create a music file, and the music file is provided to the user. Music can thus be composed at low cost. COPYRIGHT KIPO 2018

Figure legend: (110) Input/output unit; (120) Communication unit; (130) Storage unit; (140) Harmony creation unit; (150) Melody creation unit; (160) Arrangement unit; (170) Rendering unit; (180) Control unit ...

Publication date: 30-03-2021

Method and apparatus for correcting delay between accompaniment audio and unaccompanied audio, and storage medium

Number: US0010964301B2

A method and apparatus for correcting a delay between accompaniment audio and unaccompanied audio, and a storage medium are provided. The method includes: acquiring original audio of a target song, and extracting original vocal audio from the original audio; determining a first delay between the original vocal audio and the unaccompanied audio, and determining a second delay between the accompaniment audio and the original audio; and correcting a delay between the accompaniment audio and the unaccompanied audio based on the first delay and the second delay. Thus, the correction efficiency of the delay between accompaniment audio and unaccompanied audio is improved, and correction mistakes possibly caused by human factors are eliminated, thereby improving the accuracy.

Publication date: 17-10-2017

Music modeling

Number: US0009792889B1

A computer implemented method is provided for generating a prediction of a next musical note by a computer having at least a processor and a memory. A computer processor system is also provided for generating a prediction of a next musical note. The method includes storing sequential musical notes in the memory. The method further includes generating, by the processor, the prediction of the next musical note based upon a music model and the sequential musical notes stored in the memory. The method also includes updating, by the processor, the music model based upon the prediction of the next musical note and an actual one of the next musical note. The method additionally includes resetting, by the processor, the memory at fixed time intervals.

Publication date: 03-06-1998

Method and apparatus for determining the pitch of a stringed instrument

Number: GB0002319884A

The pitch of a stringed instrument which is excited by plucking or striking a string (3, 4, Fig.1) is determined by way of the vibration of the string being converted by a transducer into an electrical signal. The transducer is a pressure transducer, e.g. piezo-electric elements (7, 8, Fig.1). The electrical signal output from the transducer is digitised (43) and then subjected to differentiation (411). The differentiated signal is then further processed by devices such as a microprocessor (44) and a neural network (410) before being passed on to a MIDI interface (49).

Publication date: 22-04-2010

METHOD FOR ANALYZING A DIGITAL MUSIC AUDIO SIGNAL

Number: CA0002740638A1

The present invention concerns a music audio representation method for analyzing a music audio signal (2) in order to extract a set of Chord Family Profiles (CFP) contained in the audio music signal (2), the method comprising the steps of: a) applying a first algorithm (4) to the music audio signal (2) in order to extract first data (5) representative of the tonality of music audio signal (2), and b) applying a second algorithm (6) to said first data (5) in order to provide second data (7) representative of the tonal centre contained in the first data (5).

Publication date: 07-02-2020

System and method for reproducing sound of orchestra

Number: CN0110770817A

Publication date: 24-04-2019

Number: KR1020190042730A

Publication date: 25-09-1997

MUSIC COMPOSITION

Number: WO1997035299A1

A music composition system (1) that receives a first harmony including a first melody, analyzes the first harmony to derive in real time a rule relating the first melody to the first harmony, receives a second melody, and applies the rule in real time to the second melody to produce a second harmony relating to the second melody.

Publication date: 05-05-2005

Song search system and song search method

Number: US20050092161A1
Author: Shigefumi Urata
Assignee: Sharp Kabushiki Kaisha

A characteristic-data-extraction unit 13 extracts characteristic data containing changing information from song data, then an impression-data-conversion unit 14 uses a pre-learned hierarchical neural network to convert the characteristic data extracted by the characteristic-data-extraction unit 13 to impression data and stores it together with song data into a song database 15. A song search unit 18 searches the song database 15 based on impression data input from a PC-control unit 19, and outputs the search results to a search-results-output unit 21.

Publication date: 28-05-2020

METHOD OF AUTOMATICALLY CONFIRMING THE UNIQUENESS OF DIGITAL PIECES OF MUSIC PRODUCED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM WHILE SATISFYING THE CREATIVE INTENTIONS OF SYSTEM USERS

Number: US20200168189A1
Assignee: Amper Music, Inc.

A method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users. The method involves reviewing, selecting and providing one or more musical experience descriptors and time and/or space parameters to an automated music composition and generation engine operably connected to the system user interface. The automated music composition and generation engine includes a music piece analysis subsystem for automatically examining each piece of composed music that has been generated by said engine, comparing the digital piece of composed and generated music against other digital pieces of music composed and generated by said system for said system user, and determining whether or not the examined digital piece of composed and generated music is sufficiently unique. Also, the method automatically confirms with the system user that each examined digital piece of composed and generated music satisfies the creative intentions of the system user.

1. A method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users, said method comprising the steps of:
(a) providing an automated music composition and generation system having an automated music composition and generation engine operably connected to a system user interface;
(b) using said system user interface to review and select one or more musical experience descriptors as well as time and/or space parameters;
(c) providing said musical experience descriptors and time and/or space parameters to said automated music composition and generation engine for processing and automated composition and generation of one or more digital pieces of music in response to said musical experience ...

Publication date: 19-05-2020

Enhancements for musical composition applications

Number: US0010657934B1
Assignee: Electronic Arts Inc.

Systems and methods are provided for enhancements for musical composition applications. An example method includes receiving information identifying initiation of a music composition application, the music composition application being executed via a user device of a user, with the received information indicating a genre associated with a musical score being created via the music composition application. One or more constraints associated with the genre are determined, with the constraints indicating one or more features learned based on analyzing music associated with the genre. Musical elements specified by the user are received via the music composition application. Musical score updates are determined based on the musical elements and genre. The determined musical score updates are provided to the user device.

Publication date: 23-06-2022

SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO

Number: US20220199059A1

A device is provided for capturing vibrations produced by an object, such as a musical instrument, for example a drum head of a drum kit. The device comprises a detectable element, such as a ferromagnetic element, for example a metal shim, and a sensor spaced apart from and located relative to the musical instrument. The detectable element is located between the sensor and the musical instrument. When the musical instrument vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the musical instrument.

Publication date: 08-07-2021

Data processing method, data processing apparatus, and data processing program

Number: DE112019005226T5
Assignee: SONY Corporation

This data processing apparatus (100) comprises: an extraction unit (131) that extracts first data from elements constituting a first content item; and a model generation unit (132) that generates trained models having a first encoder (50) that computes a first feature value, the feature value for the first content item, and a second encoder (55) that computes a second feature value, a feature value for the extracted first data.

Publication date: 14-01-2015

Real-time audio manipulation

Number: GB0201421513D0

Publication date: 15-05-1997

Control structure for sound synthesis

Number: AU0007463696A

Publication date: 01-07-2021

Singing assisting system, singing assisting method, and non-transitory computer-readable medium comprising instructions for executing the same

Number: TW202125498A

A singing assisting system, a singing assisting method, and a non-transitory computer-readable medium including instructions for executing the method are provided. When the performed singing track does not appear in an ought-to-be-performed period, a singing-continuing procedure is executed. When the performed singing track is off pitch, a pitch adjustment procedure is executed.

Publication date: 18-08-1992

Rhythm pattern learning apparatus

Number: US0005138928A

A rhythm pattern generating apparatus is provided having a layered neural network to perform learning with feedback to generate an output pattern signal indicative of a musical sound pattern. The output pattern signal is generated by the layered neural network with feedback in response to a performance operation of a player. The layered neural network generates the output pattern signal indicative of the musical sound pattern based on both an input pattern signal and a weight signal. The output pattern signal is fed back by the feedback circuit to the layered neural network to perform the learning process. A drum pad can be used to provide an input to the rhythm pattern generating apparatus or, specifically, to gate an input pattern selector for selecting input pattern signals. The layered neural network with the feedback can perform the learning process using a back propagation method. In the present invention, when a new rhythm pattern is input by a musician, an output pattern signal ...

Publication date: 17-12-2020

INFORMATION PROCESSING METHOD AND APPARATUS FOR PROCESSING PERFORMANCE OF MUSICAL PIECE

Number: US20200394989A1

Provided is an information processing apparatus that generates various kinds of time series data according to a performance tendency of a user. The information processing apparatus includes an index specifying unit that specifies performance tendency information that indicates a performance tendency of a performance of a musical piece by a user by inputting observational performance data X representing the performance to a learned model La, and an information processing unit that generates time series data Z regarding the musical piece according to the performance tendency information.

1. An information processing method comprising:
generating performance tendency information indicating a performance tendency of a performance of a musical piece by a user from observational performance data representing the performance input to a learned model; and
generating time series data of the musical piece according to the generated performance tendency information.

2. The information processing method according to claim 1, wherein:
the performance tendency information includes an index value indicating a probability of the performance corresponding to a performance tendency, from among a plurality of mutually different performance tendencies, and
the generating of the time series data generates the time series data from a plurality of sets of basic data respectively corresponding to the plurality of mutually different performance tendencies, according to the respective index values of the plurality of mutually different performance tendencies.

3. The information processing method according to claim 1, wherein:
the generating of the performance tendency information generates the performance tendency information for each of a plurality of analysis periods on a time axis, using the observational performance data within the respective analysis period, and
the generating of the time series data generates, for each of the plurality of analysis periods, a portion of the time series data ...

Publication date: 28-07-2022

ELECTRONIC MUSICAL INSTRUMENT, CONTROL METHOD FOR ELECTRONIC MUSICAL INSTRUMENT, AND STORAGE MEDIUM

Number: US20220238088A1
Author: Makoto DANJYO
Assignee: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes a performance operator and at least one processor. In accordance with pitch data associated with the performance operator operated by a user, the at least one processor digitally synthesizes and outputs inferential musical sound data including an inferential performance technique of a player. The inferential performance technique of the player is based on acoustic feature data output by a trained acoustic model obtained by performing machine learning on a training score data set including training pitch data, and on a training performance data set obtained by the player playing a musical instrument; the inferred technique is not itself played in the user operation of the performance operator.

Publication date: 18-01-2023

DETERMINING RELATIONS BETWEEN MUSIC ITEMS

Number: EP3996084B1
Assignee: Spotify AB

Publication date: 22-06-1995

Signal-Analysis Device with at Least One Tensioned String and a Receiver

Number: CA0002174223A1

Publication date: 15-05-2003

System and method for prediction of musical preferences

Number: US2003089218A1

A system and a method for predicting the musical taste and/or preferences of the user. The present invention receives, on one hand, ratings of a plurality of songs from the user and/or other information about the taste of the user, and on the other hand information about the songs in the catalog from which recommendations are to be given. The method then combines both types of information in order to determine the musical preferences of the user. These preferences are then matched to at least one musical selection, which is predicted to be preferred by the user ...

Publication date: 27-07-1999

Method and apparatus of pitch recognition for stringed instruments and storage medium having recorded on it a program of pitch recognition

Number: US0005929360A1
Author: Szalay; Andreas
Assignee: BlueChip Music GmbH, Yamaha Corporation

There is provided a method of determining the pitch in string instruments that are excited by plucking or striking, wherein a vibration of a string is converted by a transducer into an electrical signal and the electrical signal is evaluated. Up to now, the transducers have primarily been electromagnetic transducers for which a plurality of evaluation algorithms and methods are available. Now however, one also wants to be able to use pressure transducers without having to use new evaluation algorithms and methods. To this end, a pressure transducer is used as the transducer and the electrical signal is subjected to differentiation with respect to time.

Publication date: 10-02-1998

Synthesizer detecting pitch and plucking point of stringed instrument to generate tones

Number: US5717155A

In an electronic musical apparatus having an acoustic instrument manually operable to commence an acoustic vibration and a tone generator responsive to the acoustic vibration to generate a musical tone having a pitch corresponding to that of the acoustic vibration, a pitch detecting device utilizes a pickup for picking up the acoustic vibration to convert the same into a waveform signal. Further, a first detector operates according to a fast algorithm for processing the waveform signal so as responsively produce a first output representative of the pitch of the acoustic vibration, and a second detector operates in parallel to the first detector for processing the same waveform signal according to a slow algorithm so as to stably produce a second output representative of the pitch of the acoustic vibration. A selector selectively feeds one of the first output and the second output to the tone generator so that the first detector and the second detector can cooperate to ensure responsive ...

Publication date: 16-02-2021

Singing voice separation with deep u-net convolutional networks

Number: US0010923141B2
Assignee: Spotify AB

A system, method and computer product for training a neural network system. The method comprises applying an audio signal to the neural network system, the audio signal including a vocal component and a non-vocal component. The method also comprises comparing an output of the neural network system to a target signal, and adjusting at least one parameter of the neural network system to reduce a result of the comparing, for training the neural network system to estimate one of the vocal component and the non-vocal component. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate vocal or instrumental components of an audio signal, depending on which type of component the system is trained to estimate.

Publication date: 29-10-2020

MUSIC DRIVEN HUMAN DANCING VIDEO SYNTHESIS

Number: US20200342646A1

The present disclosure provides a method for generating a video of a body moving in synchronization with music by applying a first artificial neural network (ANN) to a sequence of samples of an audio waveform of the music to generate a first latent vector describing the waveform and a sequence of coordinates of points of body parts of the body, by applying a first stage of a second ANN to the sequence of coordinates to generate a second latent vector describing movement of the body, by applying a second stage of the second ANN to static images of a person in a plurality of different poses to generate a third latent vector describing an appearance of the person, and by applying a third stage of the second ANN to the first latent vector, the second latent vector, and the third latent vector to generate the video.

Publication date: 26-12-2019

AUDIO EXTRACTION APPARATUS, MACHINE LEARNING APPARATUS AND AUDIO REPRODUCTION APPARATUS

Number: US20190392802A1
Assignee: CASIO COMPUTER CO., LTD.

A processor in an audio extraction apparatus performs a preprocessing operation to determine, for a stereo audio source including first channel audio data including an accompaniment sound and a vocal sound for a first channel and second channel audio data including an accompaniment sound and a vocal sound for a second channel, a difference between the first channel audio data and the second channel audio data to generate center cut audio data, and an audio extraction operation to input the first channel audio data, the second channel audio data and the center cut audio data to a trained machine learning model to extract any one of the accompaniment sound and the vocal sound.
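
The preprocessing step uses the classic "center cut" trick: lead vocals are typically panned center, so the channel difference largely cancels them. A minimal sketch of assembling the three model inputs the abstract names (NumPy; the stacking layout and the model call are our assumptions):

```python
import numpy as np

def make_model_inputs(left, right):
    """Build the three inputs for the extraction model: both channels
    plus their difference, which suppresses center-panned vocals."""
    center_cut = left - right          # vocal-reduced signal
    return np.stack([left, right, center_cut])

# inputs = make_model_inputs(ch1, ch2)
# accompaniment_or_vocal = trained_model(inputs)   # hypothetical model call
```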

Publication date: 09-04-2020

SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO

Number: US20200111468A1

A device is provided for capturing vibrations produced by an object, such as a musical instrument, for example a cymbal of a drum kit. The device comprises a detectable element, such as a ferromagnetic element, for example a metal shim, and a sensor spaced apart from and located relative to the musical instrument. The detectable element is located between the sensor and the musical instrument. When the musical instrument vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the musical instrument.

1. A device for capturing vibrations produced by an object, the device comprising:
a detectable element located relative to the object;
a sensor spaced apart from the object and located relative to the object;
wherein the detectable element is located between the sensor and the object, and wherein when the object vibrates, the sensor remains stationary and the detectable element is vibrated relative to the sensor by the object.

2. The device of claim 1, wherein the object is a musical instrument.

3. The device of claim 2, wherein the object is a cymbal, and wherein the device is a cymbal clamp.

4. The device of claim 3, wherein the detectable element is a ferromagnetic shim, and wherein the sensor is an inductive coil, and wherein the device further comprises a magnet fixed adjacent the inductive coil such that the inductive coil and the magnet remain stationary when the object vibrates.

5. The device of claim 4, wherein the ferromagnetic shim is spaced apart from the cymbal by a first pad or portion of a pad, and wherein the ferromagnetic shim is spaced apart from the sensor by a second pad or portion of a pad, such that the vibration of the ferromagnetic shim is proportional to the vibration of the cymbal.

6. The device of claim 5, wherein the first pad or portion of a pad and the second pad or portion of a pad comprise felt and the ferromagnetic shim is a metal shim.

7. A cymbal clamp comprising:
a cymbal clamping location ...

06-02-2020 publication date

AUTOMATIC ISOLATION OF MULTIPLE INSTRUMENTS FROM MUSICAL MIXTURES

Number: US20200042879A1
Assignee:

A system, method and computer product for training a neural network system. The method comprises inputting an audio signal to the system to generate plural outputs f(X, Θ). The audio signal includes one or more of vocal content and/or musical instrument content, and each output f(X, Θ) corresponds to a respective one of the different content types. The method also comprises comparing individual outputs f(X, Θ) of the neural network system to corresponding target signals. For each compared output f(X, Θ), at least one parameter of the system is adjusted to reduce a result of the comparing performed for the output f(X, Θ), to train the system to estimate the different content types. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate various different types of vocal and/or instrument components of an audio signal, depending on which type of component(s) the system is trained to estimate.

1. A method for estimating a component of a provided audio signal, comprising: converting the provided audio signal to an image; inputting the image to a U-Net trained to estimate different types of content, the different types of content including one or more of vocal content and musical instrument content, wherein, in response to the input image, the U-Net outputs signals, each representing a corresponding one of the different types of content; and converting each of the signals output by the U-Net to an audio signal.
2. The method of claim 1, wherein the U-Net comprises: a convolution path for encoding the image; and a plurality of deconvolution paths for decoding the image encoded by the convolution path, each of the deconvolution paths corresponding to one of the different types of content.
3. The method of claim 2, further comprising applying an output of at least one of the deconvolution paths as a mask to the image.
4. The method of claim 1, wherein the musical instrument content includes different types of musical ...
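The training step can be sketched as follows; this is a minimal stand-in, not the patented implementation, with a single convolution in place of the U-Net and random tensors in place of real spectrogram images:

```python
# Each output f(X, Θ) is compared to its target source and parameters are
# adjusted to reduce the per-source loss.
import torch

model = torch.nn.Conv2d(1, 3, kernel_size=3, padding=1)  # stand-in for the U-Net
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

mixture = torch.randn(8, 1, 128, 128)   # spectrogram "images" of mixed audio
targets = torch.randn(8, 3, 128, 128)   # e.g. vocals / drums / bass target spectrograms

outputs = model(mixture)                # plural outputs f(X, Θ), one per content type
loss = torch.nn.functional.l1_loss(outputs, targets)  # compare to target signals
opt.zero_grad()
loss.backward()
opt.step()                              # adjust at least one parameter
```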

27-04-2023 publication date

AUTOMATIC ISOLATION OF MULTIPLE INSTRUMENTS FROM MUSICAL MIXTURES

Number: US20230125789A1
Assignee: Spotify AB

A system, method and computer product for training a neural network system. The method comprises inputting an audio signal to the system to generate plural outputs f(X, Θ). The audio signal includes one or more of vocal content and/or musical instrument content, and each output f(X, Θ) corresponds to a respective one of the different content types. The method also comprises comparing individual outputs f(X, Θ) of the neural network system to corresponding target signals. For each compared output f(X, Θ), at least one parameter of the system is adjusted to reduce a result of the comparing performed for the output f(X, Θ), to train the system to estimate the different content types. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate various different types of vocal and/or instrument components of an audio signal, depending on which type of component(s) the system is trained to estimate.

31-05-2022 publication date

Transition functions of decomposed signals

Number: US0011347475B2
Assignee: ALGORIDDIM GMBH

A device for processing audio signals, including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain a plurality of decomposed signals, a playback unit configured to start playback of a first output signal obtained from recombining at least a first decomposed signal at a first volume level with a second decomposed signal at a second volume level, such that the first output signal substantially equals the first input signal, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit has a volume control section adapted for reducing the first and second volume levels according to first and second transition functions.
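A minimal numpy sketch of per-stem volume reduction under different transition functions; the linear ramps and timings here are assumptions, not something the patent mandates:

```python
# Each decomposed signal gets its own gain curve over its own transition interval.
import numpy as np

def transition_gain(t, start, end):
    """Volume level over time: 1 before start, linear fade, 0 after end."""
    return np.clip((end - t) / (end - start), 0.0, 1.0)

t = np.linspace(0.0, 10.0, 441000)           # 10 s timeline at 44.1 kHz
gain_first = transition_gain(t, 2.0, 6.0)    # first decomposed signal fades early
gain_second = transition_gain(t, 4.0, 9.0)   # second decomposed signal fades later
# output = stem1 * gain_first + stem2 * gain_second + track2 * (1 - gain_second)
```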

09-01-2024 publication date

Generating audio using neural networks

Number: US0011869530B2
Assignee: DeepMind Technologies Limited

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
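The sample-by-sample generation loop can be illustrated with a toy numpy sketch; the convolutional subnetwork and output layer are faked with a random score function, so this only shows how each new sample is drawn from a distribution conditioned on the preceding samples:

```python
import numpy as np

rng = np.random.default_rng(0)
num_levels = 256                        # e.g. 8-bit quantized sample values (assumed)

def score_distribution(history):
    """Stand-in for the convolutional subnetwork + output layer."""
    logits = rng.normal(size=num_levels)
    return np.exp(logits) / np.exp(logits).sum()   # softmax over possible samples

samples = []
for _ in range(16000):                  # one second at 16 kHz
    probs = score_distribution(np.array(samples))  # condition on current sequence
    samples.append(rng.choice(num_levels, p=probs))  # draw next audio sample
```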

02-10-1996 publication date

SIGNAL-ANALYSIS DEVICE WITH AT LEAST ONE TENSIONED STRING AND A RECEIVER

Number: EP0000734567A1
Author: SZALAY, Andreas
Assignee:

Described is a signal-analysis device (1) with at least one tensioned string (E1, H2, G3, D4, A5, E6) whose oscillating length can be varied by pressing the string against at least one tie-bar, the device also having a receiver (2) and an evaluation unit (3 to 9) connected to the receiver. The aim of the invention is to provide a guitar synthesizer which provides the desired note data relatively rapidly after excitation of the string. This is achieved by virtue of the fact that the evaluation unit (3 to 9) detects pulses or pulse groups which, following excitation of the string (E1, H2, G3, D4, A5, E6), pass along the string past the receiver (2), the evaluation unit generating from the sequence of pulses or pulse groups a signal which represents a note.

21-12-2022 publication date

MUSIC CONTENT GENERATION

Number: EP4104072A1
Assignee:

25-01-2023 publication date

Audio generation methods and systems

Number: GB0002609019A
Assignee:

A method of generating audio assets, comprising the steps of receiving an input audio asset having a first duration S201, generating an input image representative of the input audio asset S202, training a generative model on the input image and implementing the trained generative model to generate an output image representative of an output audio asset having a second duration different to the first duration S203, and generating the output audio asset based on the output image S204. The input and output images may be spectrograms representing the audio data. The audio data may be taken from a video game and the generation of the output image may be influenced at least in part by video game information. The input receiving step may also comprise receiving a second input audio asset which is used with the first audio asset to form a multi-channel image that represents both input audio assets and wherein the output image is also a multi-channel image ...
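Generating the input image from an audio asset amounts to computing a spectrogram. A minimal numpy sketch, with window and hop sizes chosen arbitrarily; the generative model that produces the second-duration output image is out of scope here:

```python
import numpy as np

def spectrogram(audio, win=1024, hop=256):
    """Magnitude STFT: the 'image' representation of an audio asset."""
    frames = [audio[i:i + win] * np.hanning(win)
              for i in range(0, len(audio) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time) image

asset = np.random.randn(48000)        # stand-in for a 1 s input audio asset
image = spectrogram(asset)            # input image for the generative model
print(image.shape)                    # e.g. (513, ~183)
```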

08-01-2002 publication date

System and method for prediction of musical preferences

Number: AU0007095301A
Assignee:

25-07-2017 publication date

Cognitive music engine using unsupervised learning

Number: US0009715870B2

A method for generating a musical composition based on user input is described. A first set of musical characteristics is extracted from a first input musical piece. The first set of music characteristics is prepared as an input vector into an unsupervised neural net comprised of a plurality of computing layers by perturbing the first set of musical characteristics according to a user intent expressed in the user input to create a perturbed vector. The perturbed vector is input into the first set of nodes of the unsupervised neural net. The unsupervised neural net is operated to calculate an output vector from a highest set of nodes. The output vector is used to create an output musical piece.

10-09-2019 publication date

Audio information processing method and apparatus

Number: US0010410615B2

An audio information processing method and apparatus are provided. The method includes decoding a first audio file to acquire a first audio subfile corresponding to a first sound channel and a second audio subfile corresponding to a second sound channel; extracting first audio data from the first audio subfile; extracting second audio data from the second audio subfile; acquiring a first audio energy value of the first audio data; acquiring a second audio energy value of the second audio data; and determining an attribute of at least one of the first sound channel and the second sound channel based on the first audio energy value and the second audio energy value.
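A toy numpy sketch of the energy comparison; the decision rule and the 0.5 ratio threshold are assumptions for illustration, not values taken from the patent:

```python
import numpy as np

def audio_energy(x):
    """Mean-square energy of one channel's audio data."""
    return float(np.mean(x ** 2))

ch1 = np.random.randn(44100) * 0.2    # stand-in for decoded first channel
ch2 = np.random.randn(44100) * 1.0    # stand-in for decoded second channel
e1, e2 = audio_energy(ch1), audio_energy(ch2)
attribute = "channel 1 = accompaniment" if e1 < e2 * 0.5 else "undetermined"
print(e1, e2, attribute)
```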

09-04-2020 publication date

COMPLEX EVOLUTION RECURRENT NEURAL NETWORKS

Number: US20200111483A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the ...
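A toy numpy sketch of the transition structure, using a diagonal of unit-modulus phases as the complex-valued unitary operator and a diagonal scaling as the non-unitary operator; this is purely illustrative of the cascade idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
unitary = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))  # eigenvalues of modulus 1
non_unitary = np.diag(rng.uniform(0.5, 1.5, n))               # changes vector norms

h = rng.normal(size=n) + 1j * rng.normal(size=n)              # hidden state
for _ in range(100):                                          # recurrent evolution
    h = non_unitary @ (unitary @ h)                           # cascade of linear operators
print(np.linalg.norm(h))
```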

13-08-2003 publication date

CONTROL STRUCTURE FOR SOUND SYNTHESIS

Number: EP0000858650B1

29-07-1999 publication date

Electronic sound generator for re-synthesizer

Number: DE0019734905C1

One or more oscillators (VCO K1...Kn) are activated or deactivated in accordance with control signals (K(x), A(x)) obtained from a signal characteristic to be reproduced based on pattern recognition (KL), so that the oscillators are actively switched for the oscillation waveform most similar to the one to be reproduced. A waveform memory (WT) stores several different waveforms. A method of electronic sound production is also claimed.

27-08-2003 publication date

DIGITAL MUSIC PLAYBACK APPARATUS AUTOMATICALLY SELECTING AND STORING MUSIC PART, AND METHOD THEREOF

Number: KR20030069419A
Author: AHN, HO SUNG
Assignee:

PURPOSE: A digital music playback apparatus automatically selecting and storing music part, and a method thereof are provided to automatically select and store only music signals from radio broadcast contents. CONSTITUTION: A digital signal processor(210) converts broadcasting signals into digital data or digital data into analog signals, compresses the digital data into music data to encode the digital data, or decodes the compressed digital data. A music extracting unit(220) divides the digital data output from the digital signal processor into music data and non-music data according to a music extracting algorithm for extracting only the music data, and generates beginning and end data recognizing the beginning and the end of the extracted music data. A key input unit(230) includes a broadcasting key(232) converting an operation mode of a digital music playback apparatus into a mode of receiving radio broadcast, and a recording key(234) for recording broadcasted music signal. A microprocessor ...

23-10-2018 publication date

Method for analyzing a digital music audio signal

Number: BRPI0823192A2
Assignee:

24-09-2009 publication date

System and Method for Evolving Music Tracks

Number: US2009235809A1
Assignee:

Systems and methods of generating music tracks are disclosed. One such method, for generating one type of music track from another type of music track, is implemented in a computer. The method includes the steps of receiving a music track input having a first type into an artificial neural network (ANN), and producing a music track output having a second type from the ANN, based upon the music track input having the first type.

28-05-2020 publication date

METHOD OF COMPOSING A PIECE OF DIGITAL MUSIC USING MUSICAL EXPERIENCE DESCRIPTORS TO INDICATE WHAT, WHEN AND HOW MUSICAL EVENTS SHOULD APPEAR IN THE PIECE OF DIGITAL MUSIC AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM

Number: US20200168197A1
Assignee: Amper Music, Inc.

An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of composing a piece of digital music using musical experience descriptors to indicate what, when and how particular musical events should occur in the piece of digital music to be automatically composed and generated. The method uses the system user interface to select one or more musical experience descriptors and apply them along a timeline representation of the piece of digital music to be automatically composed and generated by the automated music composition and generation engine.

1. A method of composing a piece of digital music using an automated music composition and generation system being supplied with musical experience descriptors to characterize the piece of digital music to be automatically composed and generated by said automated music composition and generation system, said method comprising the steps of:
(a) creating a project to automatically compose and generate a piece of digital music using an automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine;
(b) using said system user interface to select one or more musical experience descriptors and apply said musical experience descriptors along a timeline representation of a piece of music to be automatically composed and generated, so as to indicate what, when and how particular musical events should occur in the piece of digital music to be automatically composed and generated by said automated music composition and generation system; wherein said particular musical events are selected from the group consisting of music start, music stop, descriptor change, style change, volume change, structural change, and instrumentation change;
(c) providing said selected and applied musical experience ...

07-01-2021 publication date

Method for making music recommendations and related computing device, and medium thereof

Number: US20210004402A1
Author: Bo Chen, Hanjie WANG, Hao YE, Yan Li
Assignee: Tencent Technology Shenzhen Co Ltd

This application discloses a method for making music recommendations. The method for making music recommendations is performed by a server device. The method includes obtaining a material for which background music is to be added; determining at least one visual semantic tag of the material, the at least one visual semantic tag describing at least one characteristic of the material; identifying a matched music matching the at least one visual semantic tag from a candidate music library; sorting the matched music according to user assessing information of a user corresponding to the material; screening the matched music based on a sorting result and according to a preset music screening condition; and recommending matched music obtained through the screening as candidate music of the material.

07-01-2021 publication date

MUSICAL PERFORMANCE ANALYSIS METHOD AND MUSICAL PERFORMANCE ANALYSIS APPARATUS

Number: US20210005173A1
Author: Li Bochen, MAEZAWA Akira
Assignee:

An apparatus is provided that accurately estimates the point at which a musical performance is started by a player. The apparatus includes a musical performance analysis unit that obtains action data including a time series of feature data representing actions made by the player during a musical performance for a reference period, and estimates a sound-production point based on the action data at an estimated point using a learned model L.

1. A musical performance analysis method realized by a computer, the method comprising: obtaining action data that includes a time series of feature data representing actions made by a player during a musical performance for a reference period; and estimating a sound-production point based on the action data at an estimated point using a learned model.
2. The musical performance analysis method according to claim 1, wherein the estimating of the sound-production point includes: calculating a probability that the estimated point, which follows the reference period, is the sound-production point, using the learned model; and estimating the sound-production point based on the probability using the learned model.
3. The musical performance analysis method according to claim 2, wherein: the reference period includes a plurality of analysis points, and the calculating of the probability sequentially calculates, for the plurality of analysis points on a time axis, the probability that the estimated point, which follows the plurality of analysis points, is the sound-production point.
4. The musical performance analysis method according to claim 2, wherein the estimating of the sound-production point estimates the sound-production point based on a distribution of a plurality of the probabilities corresponding to the plurality of respective analysis points.
5. The musical performance analysis method according to claim 1, further comprising: generating the time series of feature data based on image data ...

14-01-2021 publication date

Electronic musical instrument, electronic musical instrument control method, and storage medium

Number: US20210012758A1
Assignee: Casio Computer Co Ltd

An electronic musical instrument includes: a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has learned singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.

18-01-2018 publication date

SYSTEM FOR EMBEDDING ELECTRONIC MESSAGES AND DOCUMENTS WITH AUTOMATICALLY-COMPOSED MUSIC USER-SPECIFIED BY EMOTION AND STYLE DESCRIPTORS

Number: US20180018948A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system allowing users to create and deliver electronic messages and documents such as text, SMS and email, augmented with automatically-composed music generated using user-selected music emotion and style descriptors. The automated music composition and generation system includes an automated music composition and generation engine operably connected to a system user interface, and the infrastructure of the Internet. Mobile and desktop client machines provide text, SMS and/or email services supported on the Internet. Each client machine has a text application, SMS application and/or email application that is augmented by the addition of automatically-composed music by users using the automated music composition and generation engine. By selecting and providing musical emotion and style descriptor icons to the engine, music is automatically composed, generated, and embedded in text, SMS and/or email messages for delivery to other client machines over the infrastructure of the Internet.

1. An Internet-based automated music composition and generation system allowing users to create and deliver text, SMS and email messages augmented with automatically composed music generated using user-selected music emotion and style descriptors, said Internet-based automated music composition and generation system comprising: an automated music composition and generation engine operably connected to a system user interface, and the infrastructure of the Internet; and a plurality of mobile and desktop client machines providing text, SMS and email services supported on the Internet; wherein each said client machine has a text application, SMS application and email application that can be augmented by the addition of automatically composed music by users using said automated music composition and generation engine, by selecting musical emotion descriptor icons, and musical style descriptor icons, that are provided to said automated music ...

28-01-2021 publication date

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

Number: US20210027753A1
Assignee: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.

1. An electronic musical instrument comprising: an operation unit that receives a user performance; a memory that stores a trained model that has learned singing voices of a singer; and at least one processor, wherein the at least one processor performs the following in accordance with a user operation on the operation unit: obtaining lyric data and waveform data corresponding to a first tone color; inputting the obtained lyric data to the trained model so as to cause the trained model to output acoustic feature data in response thereto; generating waveform data corresponding to a singing voice of the singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputting a singing voice based on the generated waveform data corresponding to the second tone color.
2. The electronic musical instrument according to claim 1, wherein the waveform data corresponding to the first tone color is waveform data corresponding to a sound of a musical instrument, wherein the acoustic feature data includes spectral data corresponding to the singing voice of the ...

07-02-2019 publication date

Methods, systems, articles of manufacture and apparatus for generating a response for an avatar

Number: US20190043239A1
Assignee: Intel Corp

Methods, apparatus, systems and articles of manufacture are disclosed for generating an audiovisual response for an avatar. An example method includes converting a first digital signal representative of first audio including a first tone, the first digital signal incompatible with a model, to a plurality of binary values representative of a first characteristic value of the first tone, the plurality of binary values compatible with the model, selecting one of a plurality of characteristic values associated with a plurality of probability values output from the model, the probability values incompatible for output via a second digital signal representative of second audio, as a second characteristic value associated with a second tone to be included in the second audio, the second characteristic value compatible for output via the second digital signal, and controlling the avatar to output an audiovisual response based on the second digital signal and a first response type.

06-02-2020 publication date

Singing voice separation with deep u-net convolutional networks

Number: US20200043517A1
Assignee: SPOTIFY AB

A system, method and computer product for estimating a component of a provided audio signal. The method comprises converting the provided audio signal to an image, processing the image with a neural network trained to estimate one of vocal content and instrumental content, and storing a spectral mask output from the neural network as a result of the image being processed by the neural network. The neural network is a U-Net. The method also comprises providing the spectral mask to a client media playback device, which applies the spectral mask to a spectrogram of the provided audio signal, to provide a masked spectrogram. The media playback device also transforms the masked spectrogram to an audio signal, and plays back that audio signal via an output user interface.
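Applying a spectral mask to a spectrogram and transforming the result back to audio can be sketched in plain numpy; the mask here is random rather than a real U-Net output, and overlap-add reconstruction is omitted for brevity:

```python
import numpy as np

win, hop = 1024, 256
audio = np.random.randn(48000)                       # the provided audio signal
frames = np.array([audio[i:i + win] * np.hanning(win)
                   for i in range(0, len(audio) - win, hop)])
spec = np.fft.rfft(frames, axis=1)                   # complex spectrogram
mask = np.random.rand(*spec.shape)                   # stand-in for the U-Net's spectral mask

masked = mask * np.abs(spec) * np.exp(1j * np.angle(spec))  # masked spectrogram
component_frames = np.fft.irfft(masked, axis=1)      # back to the time domain
print(component_frames.shape)
```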

10-03-2022 publication date

Electronic musical instrument, method, and storage medium

Number: US20220076658A1
Assignee: Casio Computer Co Ltd

An electronic musical instrument includes: a plurality of keys that include at least first keys corresponding to a first pitch range and second keys corresponding to a second pitch range; and at least one processor, configured to perform the following: in accordance with a key operation in the first pitch range, determining a syllable position contained in a phrase; and in accordance with a key operation in the second pitch range, instructing a sound production of a digitally synthesized sound corresponding to the determined syllable position.

08-03-2018 publication date

SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO

Number: US20180068646A1
Assignee:

A device is provided as part of a system, the device being for capturing vibrations produced by an object such as a musical instrument. Via a fixation element, the device is fixed to a drum. The device has a sensor spaced apart from a surface of the drum, located relative to the drum, and a magnet adjacent the sensor. The fixation element transmits vibrations from its fixation point on the drum to the magnet. Vibrations from the surface of the drum and from the magnet are transmitted to the sensor. A method may further be provided for interpreting an audio input, such as the output of the sensors within the system, the method comprising identifying an audio event or grouping of audio events within audio data, generating a model of the audio event that includes a representation of a timbre characteristic, and comparing that representation to expected representations.

1. A device for capturing vibrations produced by an object, the device comprising: a fixation element for fixing the device to an object; a first sensor for detecting vibration of the object at the fixation element; and a second sensor spaced apart from a surface of the object and located relative to the object.
2. The device of claim 1, wherein the object is a musical instrument.
3. The device of claim 1, wherein the second sensor is an optical sensor.
4. The device of claim 3, wherein the optical sensor is fixed relative to a visible target on a surface of the musical instrument.
5. The device of claim 1, wherein the musical instrument is a drum, and the fixation element transmits vibrations from a drum rim to the first sensor.
6. A system for capturing vibrations produced by an object, the system comprising: a device for capturing vibrations produced by an object, the device comprising: a fixation element for fixing the device to an object; a first sensor for detecting vibration of the object at the fixation element; and a second sensor spaced apart from a surface of the object and located relative to the object; and a ...

15-03-2018 publication date

METHOD FOR ENCODING SIGNALS, METHOD FOR SEPARATING SIGNALS IN A MIXTURE, CORRESPONDING COMPUTER PROGRAM PRODUCTS, DEVICES AND BITSTREAM

Number: US20180075863A1
Assignee:

A method is proposed for encoding at least two signals. The method includes mixing the at least two signals in a mixture; sampling a map Z representative of locations of the at least two signals in a time-frequency plane at sampling locations, the sampling delivering a first list of values Z_Ω; and transmitting the mixture of the at least two signals and information representative of the first list of values Z_Ω. The disclosure also relates to the corresponding method for separating signals in a mixture, and corresponding computer program products, devices and bitstream.

1. A method for encoding at least two signals, wherein said method comprises: sampling, at sampling locations, a map Z identifying which of said at least two signals is dominantly active at locations of a time-frequency representation of a mixture of said at least two signals, said sampling delivering a first list of values Z_Ω, said first list of values being ordered as a function of the order of the sampling locations; and transmitting said mixture of the at least two signals and information representative of said first list of values Z_Ω.
2. The method according to claim 1, wherein said sampling locations are based on a sampling distribution.
3. The method according to claim 2, wherein said sampling distribution is computed as a function of an energy content of said mixture in said time-frequency representation.
4. The method according to claim 2, wherein said sampling distribution is computed based on a first graph G connecting locations in said time-frequency representation based on a similarity between at least two first feature vectors based on said mixture, similar feature vectors at different locations in said time-frequency representation indicating a contribution of similar signals at said different locations.
5. The method according to claim 4, wherein said sampling distribution is computed by obtaining at least two different first graphs and electing one of said first graphs for deriving the ...

30-03-2017 publication date

MACHINES, SYSTEMS, PROCESSES FOR AUTOMATED MUSIC COMPOSITION AND GENERATION EMPLOYING LINGUISTIC AND/OR GRAPHICAL ICON BASED MUSICAL EXPERIENCE DESCRIPTORS

Number: US20170092247A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

Automated music composition and generation machine, systems and methods, and architectures that allow anyone, without possessing any knowledge of music theory or practice, or expertise in music or other creative endeavors, to instantly create unique and professional-quality music, synchronized to any kind of media content, including, but not limited to, video, photography, slideshows, and any pre-existing audio format, as well as any object, entity, and/or event, wherein the system user only requires knowledge of one's own emotions and/or artistic concepts which are to be expressed in a piece of music that will ultimately be composed by the automated composition and generation system of the present invention.

1.-39. (canceled)
40. An automated music composition and generation system driven by emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user, comprising: a system user interface for enabling system users to provide emotion-type and style-type musical experience descriptors and time and/or space parameters to said automated music composition and generation system for processing; and an automated music composition and generation engine, operably connected to said system user interface, and including a plurality of function-specific subsystems cooperating together to compose and generate one or more digital pieces of music, each of which contains a set of musical notes arranged and performed using an orchestration of one or more musical instruments selected for the digital piece of music; wherein said automated music composition and generation engine includes an arrangement of various function-specific subsystems including: a parameter transformation subsystem for receiving said emotion-type and style-type musical experience descriptors and time and/or space parameters from said system user interface, and processing and transforming said parameters and producing music-theoretic based parameters for use by one ...

13-04-2017 publication date

COGNITIVE MUSIC ENGINE USING UNSUPERVISED LEARNING

Number: US20170103740A1
Assignee:

A method for generating a musical composition based on user input is described. A first set of musical characteristics is extracted from a first input musical piece. The first set of music characteristics is prepared as an input vector into an unsupervised neural net comprised of a plurality of computing layers by perturbing the first set of musical characteristics according to a user intent expressed in the user input to create a perturbed vector. The perturbed vector is input into the first set of nodes of the unsupervised neural net. The unsupervised neural net is operated to calculate an output vector from a highest set of nodes. The output vector is used to create an output musical piece.

1. A method for generating a musical composition based on user input, comprising: responsive to user input, extracting a first set of musical characteristics from a first input musical piece; preparing the first set of musical characteristics as an input vector into an unsupervised neural net comprised of a plurality of computing layers, wherein each computing layer is composed of a set of nodes, wherein the input vector is prepared by perturbing the first set of musical characteristics according to a user intent expressed in the user input to create a perturbed input vector; providing the perturbed input vector into a first set of nodes in a first visible layer of the unsupervised neural net; operating the unsupervised neural net using the perturbed input vector to calculate an output vector from a higher set of nodes of the unsupervised neural net; and using the output vector to create an output musical piece.
2. The method as recited in claim 1, wherein the perturbing is created by inserting values into respective nodes of the first set of nodes in the first visible layer according to a rule selected according to the user intent while inserting respective members of the first set of musical characteristics into other nodes of the first set of nodes.
3. The method as recited in ...

13-04-2017 publication date

Systems and methods for capturing and interpreting audio

Number: US20170103743A1
Assignee: Sunhouse Technologies Inc

A device is provided as part of a system, the device being for capturing vibrations produced by an object such as a musical instrument. Via a fixation element, the device is fixed to a drum. The device has a sensor spaced apart from a surface of the drum, located relative to the drum, and a magnet adjacent the sensor. The fixation element transmits vibrations from its fixation point on the drum to the magnet. Vibrations from the surface of the drum and from the magnet are transmitted to the sensor. A method may further be provided for interpreting an audio input, such as the output of the sensors within the system, the method comprising identifying an audio event or grouping of audio events within audio data, generating a model of the audio event that includes a representation of a timbre characteristic, and comparing that representation to expected representations.

20-04-2017 publication date

Method for following a musical score and associated modeling method

Number: US20170110102A1
Assignee: Makemusic

A method for following a musical score in real time. At least one sound emitted by a performer is recorded. At least one chromatic vector is estimated. The chromatic vector is compared with theoretical chromatic vectors of the musical score. A transition between the chromatic vector and a previous chromatic vector with theoretical transitions of the musical score is compared. A work position of the performer depending on a previous work position is estimated from the comparison of the chromatic vector and the comparison of the transition. The recording is carried out for a suitable period depending on the ratio between a period of the transition and a reference period.
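Comparing an estimated chromatic vector against the score's theoretical vectors can be sketched with a simple similarity search; the cosine measure and random data below are illustrative assumptions:

```python
import numpy as np

def cosine(a, b):
    """Similarity between an observed and a theoretical chroma vector."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

observed = np.random.rand(12)               # chroma estimated from the recorded sound
score_chromas = np.random.rand(30, 12)      # one theoretical vector per score position
position = int(np.argmax([cosine(observed, c) for c in score_chromas]))
print("most likely score position:", position)
```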

11-04-2019 publication date

Speech recognition using convolutional neural networks

Number: US20190108833A1
Assignee: DeepMind Technologies Ltd, Google LLC

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing speech recognition by generating a neural network output from an audio data input sequence, where the neural network output characterizes words spoken in the audio data input sequence. One of the methods includes, for each of the audio data inputs, providing a current audio data input sequence that comprises the audio data input and the audio data inputs preceding the audio data input in the audio data input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of audio data inputs: receive the current audio data input sequence for the audio data input, and process the current audio data input sequence to generate an alternative representation for the audio data input.
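A minimal PyTorch sketch of a convolutional subnetwork built from dilated causal 1-D convolutions, as described above; channel counts and depths are assumptions:

```python
import torch
import torch.nn as nn

class DilatedCausalConv(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = dilation          # left-pad so outputs never see future inputs
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

net = nn.Sequential(*[DilatedCausalConv(32, 2 ** i) for i in range(6)])
audio_features = torch.randn(1, 32, 16000)   # (batch, channels, time)
alternative = net(audio_features)            # alternative representation per input
print(alternative.shape)                     # torch.Size([1, 32, 16000])
```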

18-04-2019 publication date

COMPLEX LINEAR PROJECTION FOR ACOUSTIC MODELING

Number: US20190115013A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex linear projection are disclosed. In one aspect, a method includes the actions of receiving audio data corresponding to an utterance. The method further includes generating frequency domain data using the audio data. The method further includes processing the frequency domain data using complex linear projection. The method further includes providing the processed frequency domain data to a neural network trained as an acoustic model. The method further includes generating a transcription for the utterance that is determined based at least on output that the neural network provides in response to receiving the processed frequency domain data.

1. (canceled)
2. A computer-implemented method comprising: receiving, by one or more computers, audio data corresponding to an utterance; generating, by the one or more computers, frequency domain data using the audio data; processing, by the one or more computers, the frequency domain data using a linear transformation; providing, by the one or more computers, the processed frequency domain data to a neural network of a speech recognition model; and generating, by the one or more computers, a transcription for the utterance that is determined based at least on output that the neural network provides in response to receiving the processed frequency domain data.
3. The method of claim 2, wherein the linear transformation is a complex linear projection.
4. The method of claim 2, wherein the speech recognition model is an acoustic model.
5. The method of claim 2, wherein processing the frequency domain data using linear transformation comprises processing the frequency domain data for each of multiple input frames of the audio data.
6. The method of claim 2, comprising: generating a convolutional filter with one or more real filter weights; and generating a frequency domain filter with one or more complex ...
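A toy numpy sketch of a complex linear projection of frequency-domain frames: multiply FFT frames by a complex weight matrix and apply a log-magnitude nonlinearity. All dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
frames = rng.normal(size=(100, 400))              # 100 windowed audio frames
freq = np.fft.rfft(frames, axis=1)                # frequency domain data, shape (100, 201)
W = rng.normal(size=(201, 64)) + 1j * rng.normal(size=(201, 64))  # complex projection weights
projected = np.log(np.abs(freq @ W) + 1e-6)       # features fed to the acoustic model
print(projected.shape)                            # (100, 64)
```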

25-08-2022 publication date

TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS

Number: US20220269476A1
Assignee: ALGORIDDIM GMBH

A device including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain decomposed signals, a playback unit to start playback of a first output signal obtained from recombining at least first and second decomposed signals at first and second volume levels, respectively, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit is adapted for reducing the first/second volume levels according to first/second transition functions. The device includes an analyzing unit to analyze an audio signal to determine a song part junction between two song parts. The transition time interval of at least one of the transition functions is set such as to include the song part junction.

1. A method for processing audio signals, comprising: providing a first input signal of a first input audio track and a second input signal of a second input audio track; decomposing the first input signal to obtain a plurality of decomposed signals, comprising at least a first decomposed signal and a second decomposed signal different from the first decomposed signal; assigning a first volume level to the first decomposed signal and a second volume level to the second decomposed signal; starting playback of a first output signal obtained from recombining at least the first decomposed signal at the first volume level with the second decomposed signal at the second volume level, such that the first output signal substantially equals the first input signal; while playing the first output signal, reducing the first volume level according to a first transition function and reducing the second volume level according to a second transition function different from said first transition function, wherein each of the transition functions assigns a ...

27-05-2021 publication date

PROVIDING PERSONALIZED SONGS IN AUTOMATED CHATTING

Number: US20210158789A1
Assignee: Microsoft Technology Licensing, LLC

The present disclosure provides a method and apparatus for providing personalized songs in automated chatting. A message may be received in a chat flow. Personalized lyrics of a user may be generated based at least on a personal language model of the user in response to the message. A personalized song may be generated based on the personalized lyrics. The personalized song may be provided in the chat flow.

1. A method for providing personalized songs in automated chatting, comprising: receiving a message in a chat flow; generating personalized lyrics of a user based at least on a personal language model of the user in response to the message; generating a personalized song based on the personalized lyrics; and providing the personalized song in the chat flow.
2. The method of claim 1, wherein the message indicates a public song, and the generating the personalized lyrics comprises: generating the personalized lyrics through mapping sentences in the public song's lyrics to the user's personalized sentences based at least on the personal language model.
3. The method of claim 2, wherein the generating the personalized song comprises at least one of: generating the user's voices for the personalized lyrics, and applying the public song's tune on the user's voices; and generating voices of an original singer of the public song for the personalized lyrics, and applying the public song's tune on the original singer's voices.
4. The method of claim 1, wherein the message contains at least one keyword or indicates to retrieve at least one keyword automatically, and the generating the personalized lyrics comprises: generating the personalized lyrics through mapping the at least one keyword to the user's personalized sentences based at least on the personal language model.
5. The method of claim 4, wherein the generating the personalized song comprises: generating a tune of the personalized song based on the personalized lyrics; generating ...

27-05-2021 publication date

AUTONOMOUS GENERATION OF MELODY

Number: US20210158790A1
Assignee:

Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is further used to generate a first melody that conforms to the music style and is different from the reference melody. Thus, a melody that matches user emotions and/or environmental information may be automatically created.

1. A computer-implemented method, comprising: detecting a user emotion and/or environmental information; selecting a first melody feature parameter from a plurality of melody feature parameters based on the user emotion and/or the environmental information, each of the plurality of melody feature parameters corresponding to a music style of one of a plurality of reference melodies; and generating a first melody conforming to the music style based on the first melody feature parameter, the first melody being different from the reference melodies.
2. The method according to claim 1, further comprising: generating a second melody adjacent to the first melody based on the first melody, the second melody being different from the first melody, and a music style of the second melody being the same as the music style of the first melody.
3. The method according to claim 1, further comprising: determining the plurality of melody feature parameters by encoding the plurality of reference melodies with a Variational Autoencoder (VAE) model.
4. The method according to claim 3, wherein determining the plurality of melody feature parameters comprises: for each melody feature parameter, encoding a plurality of tracks in the reference melody with the VAE model to determine a plurality of track features ...

01-09-2022 publication date

Accompaniment classification method and apparatus

Number: US20220277040A1
Author: Dong Xu

An accompaniment classification method and apparatus is provided. The method includes the following. A first type of audio features of a target accompaniment is obtained (S301, S401). Data normalization is performed on each kind of audio features in the first type of audio features of the target accompaniment to obtain a first feature-set of the target accompaniment and the first feature-set is input into a first classification model for processing (S302, S402). A first probability value output by the first classification model for the first feature-set is obtained (S303, S403). An accompaniment category of the target accompaniment is determined to be a first category of accompaniments when the first probability value is greater than a first classification threshold (S404). The accompaniment category of the target accompaniment is determined to be other categories of accompaniments when the first probability value is less than or equal to the first classification threshold.
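The normalize-then-threshold flow lends itself to a short numpy sketch; the placeholder model and the 0.8 threshold are assumptions for illustration, not the patent's values:

```python
import numpy as np

def normalize(features):
    """Per-feature data normalization of the first type of audio features."""
    mu, sigma = features.mean(axis=0), features.std(axis=0) + 1e-9
    return (features - mu) / sigma

def first_classification_model(feature_set):
    """Stand-in for the first classification model, returning a probability."""
    return float(1 / (1 + np.exp(-feature_set.mean())))

raw = np.random.rand(120, 20)                 # frames x audio features of an accompaniment
prob = first_classification_model(normalize(raw))
threshold = 0.8                               # assumed first classification threshold
category = "first category" if prob > threshold else "other categories"
print(prob, category)
```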

23-04-2020 publication date

SPEECH RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS

Number: US20200126539A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing speech recognition by generating a neural network output from an audio data input sequence, where the neural network output characterizes words spoken in the audio data input sequence. One of the methods includes, for each of the audio data inputs, providing a current audio data input sequence that comprises the audio data input and the audio data inputs preceding the audio data input in the audio data input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of audio data inputs: receive the current audio data input sequence for the audio data input, and process the current audio data input sequence to generate an alternative representation for the audio data input.

1. A neural network system implemented by one or more computers, wherein the neural network system is configured to perform a sequence processing task by processing an input sequence of data elements comprising a plurality of data elements to generate a neural network output which characterizes the input sequence, and wherein the neural network system comprises: a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of data elements: receive a current input sequence comprising the data element and the data elements that precede the data element in the input sequence, and process the current input sequence to generate an alternative representation for the data element, wherein the dilated convolutional neural network layers are causal convolutional neural network layers and the alternative representation for the data element does not depend on any data elements that follow the data element in the input sequence; and an output subnetwork, wherein the output subnetwork is ...

03-06-2021 publication date

Computer-implemented method and device for generating frequency component vector of time-series data

Number: US20210166128A1
Author: Kanru HUA, Ryunosuke DAIDO
Assignee: Yamaha Corp

A computer-implemented method generates a frequency component vector of time series data, by executing a first process and a second process in each unit step. The first process includes: receiving first data; and processing the first data using a first neural network to generate intermediate data. The second process includes: receiving the generated intermediate data; and generating a plurality of component values corresponding to a plurality of frequency bands based on the generated intermediate data such that: a first component value corresponding to a first frequency band is generated using a second neural network based on the generated intermediate data; and a second component value corresponding to a second frequency band different from the first frequency band is generated using the second neural network based on the generated intermediate data and the generated first component value corresponding to the first frequency band.
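The band-by-band generation can be sketched as follows; the two placeholder functions stand in for the first and second neural networks, and the conditioning weight is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def first_network(first_data):
    """Stand-in for the first neural network: first data -> intermediate data."""
    return np.tanh(first_data)

def second_network(intermediate, prev=None):
    """Stand-in for the second neural network; later bands see earlier bands."""
    bias = 0.0 if prev is None else 0.5 * prev
    return float(np.tanh(intermediate.sum() + bias))

intermediate = first_network(rng.normal(size=16))   # one unit step
band1 = second_network(intermediate)                # first frequency band
band2 = second_network(intermediate, band1)         # second band also uses band1's value
print(band1, band2)
```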

08-09-2022 publication date

METHOD FOR ACCOMPANIMENT PURITY CLASS EVALUATION AND RELATED DEVICES

Number: US20220284874A1
Author: Xu Dong

A method for accompaniment purity class evaluation and related devices are provided. Multiple first accompaniment data and a label corresponding to each of the multiple first accompaniment data are obtained, the label being used to indicate that corresponding first accompaniment data is pure instrumental accompaniment data or instrumental accompaniment data with background noise. An audio feature of each of the multiple first accompaniment data is extracted. Model training is performed according to the audio feature of each of the multiple first accompaniment data and the label corresponding to each of the multiple first accompaniment data, to obtain a neural network model for accompaniment purity class evaluation, a model parameter of the neural network model being determined according to an association relationship between the audio feature of each of the multiple first accompaniment data and the label corresponding to each of the multiple first accompaniment data.

1. A method for accompaniment purity class evaluation, comprising: obtaining a plurality of first accompaniment data and a label corresponding to each of the plurality of first accompaniment data, the label corresponding to each of the plurality of first accompaniment data being used to indicate that corresponding first accompaniment data is pure instrumental accompaniment data or instrumental accompaniment data with background noise; extracting an audio feature of each of the plurality of first accompaniment data; and performing model training according to the audio feature of each of the plurality of first accompaniment data and the label corresponding to each of the plurality of first accompaniment data, to obtain a neural network model for accompaniment purity class evaluation, a model parameter of the neural network model being determined according to an association relationship between the audio feature of each of the plurality of first accompaniment data and the label corresponding to each of the ...

10-06-2021 publication date

INFORMATION PROCESSING DEVICE FOR DATA REPRESENTING MOTION

Number: US20210174771A1
Author: MAEZAWA Akira
Assignee:

An information processing method includes generating a change parameter relating to a process in which a temporal relationship between a first motion and a second motion changes, by inputting, into a trained model, first time-series data that represent a content of the first motion, and second time-series data that represent a content of the second motion in parallel to the first motion.

1. An information processing method comprising: generating a change parameter relating to a process in which a temporal relationship between a first motion and a second motion changes, by inputting, into a trained model, first time-series data that represent a content of the first motion, and second time-series data that represent a content of the second motion in parallel to the first motion.
2. The information processing method according to claim 1, wherein the change parameter includes: a first parameter relating to a process in which a temporal error of the first motion with respect to the second motion changes, and a second parameter relating to a process in which a temporal error of the second motion with respect to the first motion changes.
3. The information processing method according to claim 2, wherein the first parameter is a parameter of an autoregressive process that represents the process in which the temporal error of the first motion with respect to the second motion changes, and the second parameter is a parameter of an autoregressive process that represents the process in which the temporal error of the second motion with respect to the first motion changes.
4. The information processing method according to claim 1, wherein the first motion is a performance of a first performance part from among a plurality of performance parts of a musical piece, the second motion is a performance of a second performance part, excluding the first performance part, from among the plurality of performance parts, and the temporal relationship between the first motion and the second ...

23-05-2019 publication date

COMPLEX EVOLUTION RECURRENT NEURAL NETWORKS

Number: US20190156819A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the utterance is provided.

1. A method performed by one or more computers, wherein the method comprises: receiving, by the one or more computers, audio data indicating acoustic characteristics of an utterance; generating, by the one or more computers, a first vector sequence comprising audio features determined from the audio data; generating, by the one or more computers, a second vector sequence that a first recurrent neural network outputs in response to receiving the first vector sequence as input, wherein the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary; generating, by the one or more computers, an output vector sequence that a second recurrent neural network outputs in response to receiving the second vector sequence as input, wherein the second recurrent neural network comprises one or ...

21-05-2020 publication date

METHOD, SYSTEM AND ARTIFICIAL NEURAL NETWORK

Number: US20200159490A1
Assignee: SONY CORPORATION

It is disclosed a method comprising obtaining a target spectrum, obtaining a set of non-target spectra, the set of non-target spectra comprising one or more non-target spectra, summing the target spectrum and the set of non-target spectra to obtain a mixture spectrum, and training an artificial neural network by using the mixture spectrum as input of the neural network and by using a spectrum which is based on the target spectrum as desired output of the artificial neural network. 1. (canceled)2. A method for an interactive entertainment performed by artificial neural network circuitry implementing an artificial neural network , the method comprising:obtaining a target spectrum from an audio signal produced by a target instrument;obtaining one or more non-target spectra;generating a mixture spectrum by aggregating the target spectrum and the one or more non-target spectra by the artificial neural network circuitry;training the artificial neural network circuitry with the mixture spectrum as input and the target spectrum as desired output of the artificial neural network circuitry;receiving an input spectrum representing a polyphonic audio including multiple tracks; andprocessing the input spectrum to obtain an output spectrum using the trained artificial neural network circuitry by suppressing the target spectrum of the target instrument in one of the multiple tracks in the polyphonic audio, and preserving the rest of the multiple tracks as unaltered, whereinthe target spectrum is suppressed, using the artificial neural network of the trained artificial neural network circuitry, incompletely to be only reduced to a defined threshold.3. The method of claim 2 , wherein the target spectrum is deliberately suppressed incompletely to be only reduced to the defined threshold.4. The method of claim 2 , wherein the non-target spectra comprises a non-target spectrum that is obtained from an audio signal produced by a non-target instrument.5. The method of claim 2 , wherein ...
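
A minimal PyTorch sketch of this training scheme, with random tensors standing in for real instrument spectra (the network size, spectrum length, and data are invented for illustration):

    import torch
    import torch.nn as nn

    # The mixture spectrum is the sum of target and non-target spectra; the
    # network is trained to map mixture -> target, as the method describes.
    net = nn.Sequential(nn.Linear(513, 256), nn.ReLU(), nn.Linear(256, 513))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(100):
        target = torch.rand(8, 513)      # stand-in target-instrument spectra
        nontarget = torch.rand(8, 513)   # stand-in sum of non-target spectra
        mixture = target + nontarget     # mixture spectrum used as input
        loss = nn.functional.mse_loss(net(mixture), target)
        opt.zero_grad()
        loss.backward()
        opt.step()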

01-07-2021 publication date

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE AND A NON-TRANSITORY STORAGE MEDIUM

Number: US20210201865A1
Author: MAEZAWA Akira, UEHARA Misa
Assignee:

An information processing method implemented by a computer, the information processing method including generating pedal data representing an operation period of a pedal that extends sound production by key depression, from playing data representing a playing content.

1. An information processing method implemented by a computer for a keyboard musical instrument provided with a pedal function for a pedal to extend sound production of a depressed key representing a pitch of a keyboard thereof, the information processing method comprising: obtaining playing data representing a playing content that includes key data for operating the keyboard, but does not include any pedal data for operating the pedal, or any included pedal data for operating the pedal is indistinguishable from the included key data; and generating pedal operation data representing a pedal operation period of the pedal from the obtained playing data.

2. The information processing method according to claim 1, wherein the playing data includes pitch operation data, which represents a sound production period for each of a plurality of pitches of the keyboard.

3. The information processing method according to claim 1, wherein the generating generates the pedal operation data from the playing data by a learned model that has learned a relationship between an input corresponding to the playing data and an output corresponding to the pedal operation data.

4. The information processing method according to claim 1, wherein the generating further generates key operation data representing a key depression period together with the pedal operation period from the playing data.

5. The information processing method according to claim 4, wherein the generating generates both the pedal operation data and the key operation data from the playing data by a learned model that has learned a relationship between an input corresponding to the playing data and outputs corresponding to the key operation data and the ...

28-05-2020 publication date

LEARNING MODEL GENERATION METHOD, LEARNING MODEL GENERATION DEVICE, AND AUTOMATIC MUSICAL PERFORMANCE ROBOT

Number: US20200168186A1
Author: Yamamoto Kazuhiko
Assignee:

Disclosed is a learning model generation method executed by a computer, including: striking a percussion instrument with a striking member to emit a musical sound; and conducting machine learning upon receiving an input of the musical sound emitted from the percussion instrument, and generating, based on the machine learning, a learning model for outputting numerical values for setting musical performance parameters for an automatic musical performance of the percussion instrument that is struck when the striking member is driven. 1. A learning model generation method executed by a computer , comprising:striking a percussion instrument with a striking member to emit a musical sound; andconducting machine learning upon receiving an input of the musical sound emitted from the percussion instrument, and generating, based on the machine learning, a learning model for outputting numerical values for setting musical performance parameters for an automatic musical performance of the percussion instrument that is struck when the striking member is driven.2. The learning model generation method according to claim 1 ,wherein the machine learning is a process that uses an error in the musical sound emitted from the percussion instrument, andthe learning model outputs, with respect to the error in the musical sound at one time point on a time axis, adjustment values for adjusting basic values of the musical performance parameters at an other time point later than the one time point.3. The learning model generation method according to claim 2 ,wherein, as regards each of a plurality of time points on the time axis and with respect to an error in a musical sound that is emitted in accordance with the basic values of the musical performance parameters at a specific time point and with adjustment values generated at the specific time point, the learning model outputs adjustment values for adjusting the basic values of the musical performance parameters at a time point later than ...

28-05-2020 publication date

METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING MUSIC COMPOSITIONS AND PRODUCTIONS USING LYRICAL INPUT AND MUSIC EXPERIENCE DESCRIPTORS

Number: US20200168187A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation process within an automated music composition and generation system driven by lyrical musical experience descriptors. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to select and provide musical experience descriptors, including lyrics, to the automated music composition and generation engine for processing by said automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the musical experience descriptors and lyrics provided.

1. An automated music composition and generation system comprising: a system user interface for receiving musical experience descriptors and lyrical descriptions from a system user; an automated pitch event analyzing subsystem for automatically analyzing said lyrical description and generating corresponding pitch events; and an automated music composition and generation engine operably connected to said system user interface, and said automated pitch event analyzing subsystem; wherein said musical experience descriptors and said pitch events are used by said automated music composition and generation engine to automatically compose and generate a piece of music for musically scoring media or an event marker with said piece of music.

2. The automated music composition and generation system of claim 1, wherein said musical experience descriptors and said lyrical word descriptions are produced using a keyboard and/or a speech recognition interface operably connected to said system user interface.

3. The automated music composition and generation system of claim 1, wherein said media is selected from the group consisting of video recordings, slide-shows, audio recordings, and event ...

28-05-2020 publication date

AUTONOMOUS MUSIC COMPOSITION AND PERFORMANCE SYSTEM EMPLOYING REAL-TIME ANALYSIS OF A MUSICAL PERFORMANCE TO AUTOMATICALLY COMPOSE AND PERFORM MUSIC TO ACCOMPANY THE MUSICAL PERFORMANCE

Number: US20200168188A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes musical signals from the set of real or synthetic musical instruments, composes and generates music in real-time that augments the music being played by the band of musicians, and/or composes and generates music for subsequent playback, review and consideration by the human musicians.

1. An autonomous music composition and performance system comprising: an automated music composition and generation engine configured to (i) receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians, (ii) buffer and analyze said musical signals from said set of real or synthetic musical instruments, (iii) compose and generate either music in real-time that augments the music being played by the band of musicians, or music for subsequent playback, review and consideration by said human musicians; a housing containing a system interface to said automated music composition and generation engine and further including a display screen for selecting graphical icons and reviewing graphical information; a first set of audio signal input connectors for receiving electrical signals produced from said set of real or synthetic musical instruments; a second set of audio signal input connectors for receiving electrical signals produced from one or more microphones; and an audio output signal connector for delivering audio output signals to an audio transducer for audio reproduction; wherein said automated music composition and generation engine is configured to (i) receive electrical signals from said set of real or synthetic musical instruments being played by said group of human musicians or said one or more microphones, (ii) buffer and analyze these electrical ...

28-05-2020 publication date

Automated music composition and generation system supporting automated generation of musical kernels for use in replicating future music compositions and production environments

Number: US20200168190A1
Author: Andrew H. Silverstein
Assignee: Amper Music Inc

An automated music composition and generation system provided with a system user interface enabling system users to review, select and provide one or more musical experience descriptors as well as time and/or space parameters, to an automated music composition and generation engine, operably connected to the system user interface. The automated music composition and generation engine includes a musical kernel generation subsystem for automatically analyzing and saving musical kernel elements automatically abstracted from the digital piece of music. The abstracted musical kernel elements distinguish the digital piece of music from any other digital piece of music automatically composed and generated by the automated music composition and generation system, and serve as a music kernel definition of the digital piece of composed music, which can be subsequently used during future automated music composition and generation processes, and in future music production environments, to replicate the digital piece of composed music at a later time, either with complete or incomplete accuracy, as required or desired by the system user.

28-05-2020 publication date

Automatically managing the musical tastes and preferences of system users based on user feedback and autonomous analysis of music automatically composed and generated by an automated music composition and generation system

Number: US20200168191A1
Author: Andrew H. Silverstein
Assignee: Amper Music Inc

An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user. The automated music composition and generation engine includes a user taste generation subsystem for automatically (i) determining the musical tastes and preferences of a system user based on user feedback and autonomous piece analysis, (ii) maintaining a system user profile reflecting musical tastes and preferences of each system user, and (iii) using the musical taste and preference information to change or modify the musical experience descriptors provided to the system to produce a digital piece of composed music composition that better reflects the musical tastes and preferences of the system user.

28-05-2020 publication date

AUTOMATICALLY MANAGING THE MUSICAL TASTES AND PREFERENCES OF A POPULATION OF USERS REQUESTING DIGITAL PIECES OF MUSIC AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM

Number: US20200168192A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors as well as time and/or space parameters selected by the system user. The automated music composition and generation engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting musical tastes and preferences of each system user; and a population taste aggregation subsystem for automatically aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the automatically generated digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and more accurately and quickly meet future system user requests for automated music compositions.

1. An automated music composition and generation system comprising: a system user interface for enabling a population of system users to compose and generate digital pieces of music, by selecting one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to said system user interface, for receiving, storing and processing said musical experience descriptors and time and/or space parameters selected by each system user in said population of system users; wherein said automated music composition and generation engine automatically composes and generates one or more digital pieces of music in response to each set of said musical experience descriptors and time and/or space parameters selected by one of said system users in said population of system ...

28-05-2020 publication date

AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM EMPLOYING VIRTUAL MUSICAL INSTRUMENT LIBRARIES FOR PRODUCING NOTES CONTAINED IN THE DIGITAL PIECES OF AUTOMATICALLY COMPOSED MUSIC

Number: US20200168193A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to the musical experience descriptors and time and/or space parameters selected by the system user. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The automated music composition and generation engine includes: a digital piece creation subsystem for creating and delivering the digital piece of music to the system user interface; and a digital audio sample producing subsystem supported by virtual musical instrument libraries for producing digital audio samples of the set of notes contained in the generated digital piece of composed music. 1. An automated music composition and generation system comprising:a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; andan automated music composition and generation engine, operably connected to said system user interface, for receiving, storing and processing said musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to said musical experience descriptors and time and/or space parameters selected by said system user,wherein each said digital piece of composed and generated music contains a set of musical notes arranged and performed in said digital piece of composed music ...

28-05-2020 publication date

AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM DRIVEN BY LYRICAL INPUT

Number: US20200168194A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation process within an automated music composition and generation system driven by lyrics. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to provide lyrics to the automated music composition and generation engine for processing by the automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the lyrics provided as input. The lyrics are analyzed for vowel formants to generate pitch events, which are used to support the automated music composition process.

1. A method of composing and generating music in an automated manner using a pitch event analyzing subsystem in an automated music composition and generation system, said method comprising the following sequence of steps: (a) providing lyrical input, expressed in typed, spoken or sung format, to said system user interface of said automated music composition and generation system, for one or more scenes in a video or media object to be scored with music composed and generated by said automated music composition and generation system; (b) using said pitch event analyzing subsystem to extract pitch events, rhythmic information and/or prosodic information from the analyzed lyrical input, and coding with timing information on when such detected events occurred; (c) encoding the extracted pitch events, rhythmic information and/or prosodic information to precisely indicate when such detected events occurred along a time line; and (d) providing the extracted pitch event, rhythmic and prosodic information to said automated music composition and generation system for use in constraining the system operating parameters employed in said function-specific subsystems of said automated music composition ...
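
The pitch-event analysis of step (b) lends itself to a small illustration. The sketch below is hypothetical (a naive autocorrelation pitch tracker, not the patented subsystem): it emits timestamped pitch events for voiced, vowel-like regions of a lyric recording:

    import numpy as np

    def pitch_events(audio, sr, frame=1024, hop=512):
        events = []
        for start in range(0, len(audio) - frame, hop):
            x = audio[start:start + frame] * np.hanning(frame)
            ac = np.correlate(x, x, "full")[frame - 1:]
            lag = int(np.argmax(ac[40:400])) + 40  # plausible voice pitch range
            if ac[lag] > 0.3 * ac[0]:              # voiced enough: emit an event
                events.append({"time_s": start / sr, "f0_hz": sr / lag})
        return events

    sr = 16000
    t = np.arange(sr) / sr
    print(pitch_events(np.sin(2 * np.pi * 200 * t), sr)[:2])  # ~200 Hz events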

28-05-2020 publication date

AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEMS, ENGINES AND METHODS EMPLOYING PARAMETER MAPPING CONFIGURATIONS TO ENABLE AUTOMATED MUSIC COMPOSITION AND GENERATION

Number: US20200168195A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system includes a graphical user interface (GUI) based system user interface for enabling system users to review and select one or more musical experience descriptors as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the GUI-based system user interface, for receiving, storing and processing the musical experience descriptors and time and/or space parameters, and composing and generating digital pieces of music, each containing a set of musical notes arranged and performed in the digital piece of composed music. A system network and methods are provided for designing and developing parameter mapping configurations (PMCs) used in the automated music composition and generation engine so as to enable the automated music composition and generation engine to automatically compose and generate music in response to musical experience descriptors and time and/or space parameters provided as input to the system.

1.-12. (canceled)

13. A system network for configuring an automated music composition and generation engine having a parameter configuration mode and an automated music composition and generation mode, said system network comprising: one or more remote system designer client workstations, operably connected to an automated music composition and generation engine having a parameter configuration mode and an automated music composition and generation mode, and a system user interface allowing a system user to provide emotion-type musical experience descriptors, style-type musical experience descriptors and timing and/or space parameters, as input to said automated music composition for storage and processing; wherein each workstation client system supports a GUI-based work environment for creating and managing parameter mapping configurations (PMC) within said automated music composition and generation engine; wherein, during said parameter configuration mode ...
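
As an illustration of what a parameter mapping configuration might look like as data, here is a hypothetical sketch; the descriptor pairs and parameter values are invented, not taken from the patent:

    # Emotion/style descriptors map to concrete music-generation parameters.
    PMC = {
        ("happy", "pop"):  {"tempo_bpm": 120, "mode": "major", "key": "C"},
        ("sad",   "film"): {"tempo_bpm": 70,  "mode": "minor", "key": "D"},
    }

    def engine_parameters(emotion, style, duration_s):
        params = dict(PMC[(emotion, style)])  # descriptor -> parameter mapping
        params["duration_s"] = duration_s     # timing parameter from the user
        return params

    print(engine_parameters("happy", "pop", 30))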

28-05-2020 publication date

METHOD OF SCORING DIGITAL MEDIA OBJECTS USING MUSICAL EXPERIENCE DESCRIPTORS TO INDICATE WHAT, WHERE AND WHEN MUSICAL EVENTS SHOULD APPEAR IN PIECES OF DIGITAL MUSIC AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM

Number: US20200168196A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of scoring a selected media object with one or more pieces of digital music. The method uses the system user interface to select one or more musical experience descriptors and then apply the selected musical experience descriptors to the selected digital media object to indicate what, when and how particular musical events should occur in the one or more pieces of digital music to be automatically composed and generated by the automated music composition and generation engine. The generated piece of digital music is then used in musically scoring the selected digital media object. 1. A method of scoring a digital media object using an automated music composition and generation system being supplied with musical experience descriptors to characterize one or more pieces of digital music to be automatically composed and generated by said automated music composition and generation system , for use in musically scoring said digital media object , said method comprising the steps of:(a) selecting a digital media object to be scored with one or more pieces of digital music automatically generated by said automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine;(b) providing said selected digital media object to said system user interface;(c) using said system user interface to select one or more musical experience descriptors and then apply the selected musical experience descriptors to said selected digital media object so as to indicate what, when and how particular musical events should occur in said one or more pieces of digital music to be automatically composed and generated by said automated music composition and generation system, for use in musically scoring said selected digital media ...

05-07-2018 publication date

Machine Learning to Generate Music from Text

Number: US20180190249A1
Assignee:

The present disclosure provides systems and methods that leverage one or more machine-learned models to generate music from text. In particular, a computing system can include a music generation model that is operable to extract one or more structural features from an input text. The one or more structural features can be indicative of a structure associated with the input text. The music generation model can generate a musical composition from the input text based at least in part on the one or more structural features. For example, the music generation model can generate a musical composition that exhibits a musical structure that mimics or otherwise corresponds to the structure associated with the input text. For example, the music generation model can include a machine-learned audio generation model. In such fashion, the systems and methods of the present disclosure can generate music that exhibits a globally consistent theme and/or structure.

1. A computer system to generate music from text, the system comprising: a feature extractor configured to extract one or more structural features from an input text, wherein the one or more structural features are indicative of a structure associated with the input text; a machine-learned audio generation model configured to obtain the one or more structural features from the feature extractor and generate a musical composition from the input text based at least in part on the one or more structural features; one or more processors; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the ...: obtain the input text; input the input text into the feature extractor; receive the one or more structural features as an output of the feature extractor; input the one or more structural features into the machine-learned audio generation model; and receive data descriptive of the musical composition as an output of the machine-learned audio generation model.
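
A hypothetical sketch of the feature-extraction step (the feature names are invented): derive coarse structural features, position, line length, and repetition, of the kind that could condition an audio generation model:

    def structural_features(text):
        lines = [l.strip() for l in text.split("\n") if l.strip()]
        seen = {}
        features = []
        for i, line in enumerate(lines):
            features.append({
                "position": i / max(1, len(lines) - 1),   # where in the text
                "length": len(line.split()),              # line length in words
                "repeat": seen.setdefault(line, i) != i,  # repeated line (chorus?)
            })
        return features

    print(structural_features("la la la\nverse one here\nla la la"))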

20-07-2017 publication date

COGNITIVE MUSIC ENGINE USING UNSUPERVISED LEARNING

Number: US20170206875A1
Assignee:

A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector as input in a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net is comprised of a plurality of computing layers, each computing layer composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher level hidden layer in the unsupervised neural net. The output vector is used to create an output musical piece.

1. A method for generating a musical composition, comprising: receiving a first set of musical characteristics from a first input musical piece as an input vector; perturbing the first set of musical characteristics to create a perturbed input vector as input in a first set of nodes in a first visible layer of an unsupervised neural net, the unsupervised neural net comprised of a plurality of computing layers, each computing layer composed of a respective set of nodes; operating the unsupervised neural net to calculate an output vector from a higher level hidden layer in the unsupervised neural net; and using the output vector to create an output musical piece.

2. The method as recited in claim 1, further comprising receiving an expressed user intent, wherein the perturbing is performed by inserting values into a set of perturbation nodes in the first visible layer according to a rule selected according to the expressed user intent.

3. The method as recited in claim 2, wherein in response to an expressed user intent, pitches having an interval from a note in the input piece are inserted into the set of perturbation nodes in the first visible layer.

4. The method as recited in claim 2, further comprising: receiving a user input indicating a degree of similarity for the ...
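
A toy NumPy sketch of the perturbation step (the pitch-class encoding, the interval rule, and the tied-weight reconstruction are assumptions for illustration, not the patented network):

    import numpy as np

    rng = np.random.default_rng(0)
    pitch_classes = np.zeros(12)
    pitch_classes[[0, 4, 7]] = 1.0        # C major triad as the input vector

    perturbed = pitch_classes.copy()
    interval = 5                          # rule: insert a pitch a fourth above C
    perturbed[(0 + interval) % 12] = 1.0  # value placed in a perturbation node

    W = rng.normal(size=(12, 8))          # visible-to-hidden weights
    hidden = 1.0 / (1.0 + np.exp(-(perturbed @ W)))  # hidden-layer activations
    output = 1.0 / (1.0 + np.exp(-(hidden @ W.T)))   # reconstructed output vector
    print(np.round(output, 2))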

05-08-2021 publication date

Systems, devices, and methods for musical catalog amplification services

Number: US20210241402A1
Assignee: Obeebo Labs Ltd

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).

12-08-2021 publication date

Audio Techniques for Music Content Generation

Number: US20210247954A1
Assignee: Aimi Inc

Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal (e.g., an audio signal graph) relative to time and a graph of the signal relative to beats (e.g., a signal graph). The signal graph is invariant to tempo, which allows for tempo invariant modification of audio parameters of the music content in addition to tempo variant modifications based on the audio signal graph.
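
The tempo-invariance idea can be shown in a few lines. In this minimal sketch (invented values), a volume envelope is defined against beats, so a tempo change moves only its positions in seconds:

    def beats_to_seconds(beat, bpm):
        return beat * 60.0 / bpm

    envelope_by_beat = [(0, 0.0), (4, 1.0), (8, 0.0)]  # (beat, volume) pairs
    for bpm in (90, 120):                              # a tempo change...
        by_time = [(beats_to_seconds(b, bpm), v) for b, v in envelope_by_beat]
        print(bpm, by_time)                            # ...shifts only the time axis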

01-08-2019 publication date

METHOD OF AND SYSTEM FOR CONTROLLING THE QUALITIES OF MUSICAL ENERGY EMBODIED IN AND EXPRESSED BY DIGITAL MUSIC TO BE AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION ENGINE

Number: US20190237051A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system and process for producing one or more pieces of digital music, by providing a set of musical energy (ME) quality control parameters to an automated music composition and generation engine, applying certain of the selected musical energy quality control parameters as markers to specific spots along the timeline of a selected media object or event marker by the system user during a scoring process, and providing the selected set of musical energy quality control parameters to drive the automated music composition and generation engine to automatically compose and generate one or more pieces of digital music with control over the specified qualities of musical energy embodied in and expressed by the piece of digital music to be composed and generated by the automated music composition and generation engine.

1. An automated music composition and generation system for composing and generating pieces of digital music in response to a system user providing, as input, musical energy (ME) quality control parameters, said automated music composition and generation system comprising: a system user interface subsystem supporting spotting media objects and timeline-based event markers, and employing a graphical user interface (GUI) for supporting the selection of musical energy (ME) quality control parameters including (i) emotion/mood and style/genre type musical experience descriptors (MXDs), and timing parameters, and (ii) one or more musical energy quality (ME) control parameters selected from the group consisting of instrumentation, ensemble, volume, tempo, rhythm, harmony, and timing (e.g. start/hit/stop) and framing (e.g. intro, climax, outro or ICO); and wherein said musical energy quality control parameters are applied along the timeline of a graphical representation of a selected media object or timeline-based event marker, so as to control particular musical energy qualities within the piece of digital music being ...

09-09-2021 publication date

AUTOMATIC ISOLATION OF MULTIPLE INSTRUMENTS FROM MUSICAL MIXTURES

Number: US20210279588A1
Assignee: SPOTIFY AB

A system, method and computer product for training a neural network system. The method comprises inputting an audio signal to the system to generate plural outputs f(X, Θ). The audio signal includes one or more of vocal content and/or musical instrument content, and each output f(X, Θ) corresponds to a respective one of the different content types. The method also comprises comparing individual outputs f(X, Θ) of the neural network system to corresponding target signals. For each compared output f(X, Θ), at least one parameter of the system is adjusted to reduce a result of the comparing performed for the output f(X, Θ), to train the system to estimate the different content types. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate various different types of vocal and/or instrument components of an audio signal, depending on which type of component(s) the system is trained to estimate.

1. A method for estimating a component of a provided audio signal, comprising: converting the provided audio signal to an image; inputting the image to a U-Net trained to estimate different types of content, the different types of content including one or more of vocal content and musical instrument content, wherein, in response to the input image, the U-Net outputs signals, each representing a corresponding one of the different types of content; and converting each of the signals output by the U-Net to an audio signal.

2.-20. (canceled)

This application is a continuation of U.S. application Ser. No. 16/521,756, filed Jul. 25, 2019, which is a continuation-in-part of each of (1) U.S. application Ser. No. 16/055,870, filed Aug. 6, 2018, entitled “SINGING VOICE SEPARATION WITH DEEP U-NET CONVOLUTIONAL NETWORKS”, (2) U.S. application Ser. No. 16/242,525, filed Jan. 8, 2019, entitled “SINGING VOICE SEPARATION WITH DEEP U-NET CONVOLUTIONAL NETWORKS”, and (3) U.S. application Ser. No. 16/165,498, filed Oct. 19, 2018, entitled “SINGING ...
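
A schematic sketch of the estimation pipeline, assuming some trained model `unet` that maps a magnitude spectrogram to a soft mask for one content type; the stand-in model below is invented, and a real system would use a trained U-Net:

    import numpy as np

    def separate(mixture_stft, unet):
        magnitude, phase = np.abs(mixture_stft), np.angle(mixture_stft)
        mask = unet(magnitude)                     # U-Net output, values in [0, 1]
        component_mag = mask * magnitude           # estimated component magnitude
        return component_mag * np.exp(1j * phase)  # reuse the mixture phase

    fake_unet = lambda m: np.clip(m / (m.max() + 1e-9), 0.0, 1.0)  # stand-in
    stft = np.random.randn(513, 100) + 1j * np.random.randn(513, 100)
    vocals_stft = separate(stft, fake_unet)        # would be inverted back to audio
    print(vocals_stft.shape)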

15-08-2019 publication date

GENERATING AUDIO USING NEURAL NETWORKS

Number: US20190251987A1
Assignee:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

1. A neural network system implemented by one or more computers, wherein the neural network system is configured to autoregressively generate an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps, wherein the output sequence of audio data is a verbalization of a text segment, and wherein the neural network system comprises: a convolutional subnetwork comprising one or more audio-processing convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of time steps: receive (i) a current sequence of audio data that comprises the respective audio sample at each time step that precedes the time step in the output sequence, and (ii) features of the text segment, and process the current sequence of audio data and the features of the text segment to generate an alternative representation for the time step; and an output layer configured to, for each of the plurality of time steps: receive the alternative representation for the time step, and process the alternative representation for the time step to generate an output that ...
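
A schematic autoregressive generation loop in the spirit of the claim (the stand-in model is invented; a real system would use the convolutional subnetwork and output layer described above):

    import numpy as np

    def generate(model, text_features, n_steps, n_levels=256):
        samples = [128]                                # e.g. 8-bit mu-law silence
        for _ in range(n_steps):
            scores = model(samples, text_features)     # scores over next samples
            probs = np.exp(scores) / np.exp(scores).sum()  # softmax distribution
            samples.append(int(np.random.choice(n_levels, p=probs)))
        return samples

    fake_model = lambda seq, feats: np.random.randn(256)  # stand-in network
    print(generate(fake_model, None, 5))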

14-09-2017 publication date

AUTONOMOUS MUSIC COMPOSITION AND PERFORMANCE SYSTEMS AND DEVICES

Number: US20170263226A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes musical signals from the set of real or synthetic musical instruments, composes and generates music in real-time that augments the music being played by the band of musicians, and/or records, analyzes and composes music recorded for subsequent playback, review and consideration by the human musicians.

1. An autonomous music composition and performance system comprising: an automated music composition and generation engine configured to (i) receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians, (ii) buffer and analyze said musical signals from said set of real or synthetic musical instruments, (iii) compose and generate music in real-time that augments the music being played by the band of musicians, or (iv) record, analyze and compose music recorded for subsequent playback, review and consideration by said human musicians; a transportable housing containing said automated music composition and generation engine and further including a touch-type display screen for selecting graphical icons and reviewing graphical information; a first set of audio signal input connectors for receiving electrical signals produced from said set of musical instruments; a second set of audio signal input connectors for receiving electrical signals produced from one or more microphones; a set of MIDI signal input connectors for receiving MIDI input signals from the set of instruments in the system environment; and an audio output signal connector for delivering audio output signals to audio signal preamplifiers and/or amplifiers; wherein said automated music composition and generation engine is configured to (i) receive musical signals from said set of a ...

14-09-2017 publication date

AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM DRIVEN BY EMOTION-TYPE AND STYLE-TYPE MUSICAL EXPERIENCE DESCRIPTORS

Number: US20170263227A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters supplied by a system user during an automated music composition and generation process. The system includes a system user interface allowing a system user to input (i) linguistic and/or graphical icon based musical experience descriptors, and (ii) a video, audio-recording, image, slide-show, or event marker, as input through the system user interface. 1. An automated music composition and generation system comprising:a system user interface for enabling system users to create a project for a digital piece of music to be composed and generated, and review and select one or more emotion-type musical experience descriptors, one or more style-type musical experience descriptors, as well as time and/or space parameters; andan automated music composition and generation engine, operably connected to said system user interface, for receiving, storing and processing said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by the system user;wherein said automatic music composition and generation engine including a plurality of function-specific subsystems cooperating together to automatically compose and generate one or more digital pieces of music in response to said emotion-type and style-type musical experience descriptors and time and/or space parameters selected by said system user;wherein said digital piece of music composed and generated has a rhythmic landscape and a pitch landscape and contains a set of musical notes arranged and performed using an orchestration of one or more musical instruments selected for the digital piece of music;wherein said plurality of function-specific subsystems include a rhythmic landscape subsystem, a ...

22-08-2019 publication date

SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO

Number: US20190259361A1
Assignee:

A device is provided as part of a system, the device being for capturing vibrations produced by an object such as a musical instrument. Via a fixation element, the device is fixed to a drum. The device has a sensor spaced apart from a surface of the drum, located relative to the drum, and a magnet adjacent the sensor. The fixation element transmits vibrations from its fixation point on the drum to the magnet. Vibrations from the surface of the drum and from the magnet are transmitted to the sensor. A method may further be provided for interpreting an audio input, such as the output of the sensors within the system, the method comprising identifying an audio event or grouping of audio events within audio data, generating a model of the audio event that includes a representation of a timbre characteristic, and comparing that representation to expected representations.

1. A device for capturing vibrations produced by an object, the device comprising: a fixation element for fixing the device to an object; a first sensor for detecting vibration of the object at the fixation element; and a second sensor spaced apart from a surface of the object and located relative to the object.

2. The device of claim 1, wherein the object is a musical instrument.

3. The device of claim 1, wherein the second sensor is an optical sensor.

4. The device of claim 1, wherein the optical sensor is fixed relative to a visible target on a surface of the musical instrument.

5. The device of claim 1, wherein the musical instrument is a drum, and the fixation element transmits vibrations from a drum rim to the first sensor.

6. A system for capturing vibrations produced by an object, the system comprising: a device for capturing vibrations produced by an object, the device comprising: a fixation element for fixing the device to an object; a first sensor for detecting vibration of the object at the fixation element; and a second sensor spaced apart from a surface of the object and located relative to the object, and a ...

11-11-2021 publication date

Learning progression for intelligence based music generation and creation

Number: US20210350776A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An artificial intelligence (AI) method includes generating a first musical interaction behavioral model. The first musical interaction behavioral model causes an interactive electronic device to perform a first set of musical operations and a first set of motional operations. The AI method further includes receiving user inputs received in response to the performance of the first set of musical operations and the first set of motional operations and determining a user learning progression level based on the user inputs. In response to determining that the user learning progression level is above a threshold, the AI method includes generating a second musical interaction behavioral model. The second musical interaction behavioral model causes the interactive electronic device to perform a second set of musical operations and a second set of motional operations. The AI method further includes performing the second set of musical operations and the second set of motional operations.

29-08-2019 publication date

Chord Identification Method and Chord Identification Apparatus

Number: US20190266988A1
Author: Kouhei SUMI
Assignee: Yamaha Corp

A chord identification method selects from among a plurality of chord identifiers a chord identifier that corresponds to an attribute of a piece of music represented by an audio signal, where the plurality of chord identifiers corresponds to respective ones of a plurality of attributes relating to pieces of music; and identifies a chord for the audio signal by applying a feature amount of the audio signal to the selected chord identifier.
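
A minimal sketch of the selection-then-identification flow (the attributes, the stand-in identifiers, and the chroma feature are invented for illustration):

    # One chord identifier per music attribute (here, genre); each maps a
    # feature amount (a chroma vector) to a chord label.
    identifiers = {
        "rock": lambda chroma: "C" if chroma[0] > chroma[9] else "Am",
        "jazz": lambda chroma: "Cmaj7" if chroma[0] > chroma[9] else "Am7",
    }

    def identify_chord(attribute, chroma):
        identifier = identifiers[attribute]  # select by attribute of the piece
        return identifier(chroma)            # apply the feature amount

    chroma = [0.9, 0, 0, 0, 0.7, 0, 0, 0.8, 0, 0.1, 0, 0]
    print(identify_chord("jazz", chroma))    # prints "Cmaj7"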

27-09-2018 publication date

MODELING OF THE LATENT EMBEDDING OF MUSIC USING DEEP NEURAL NETWORK

Number: US20180276540A1
Author: Xing Zhou
Assignee:

Methods and systems are provided for detecting and cataloging qualities in music. While both the data volume and heterogeneity of digital music content are huge, it has become increasingly important and convenient to build a recommendation or search system to facilitate surfacing this content to the user or consumer community. Embodiments use a deep convolutional neural network to imitate how the human brain processes hierarchical structures in auditory signals, such as music, speech, etc., at various timescales. This approach can be used to discover the latent factor models of the music based upon acoustic hyper-images that are extracted from the raw audio waves of music. These latent embeddings can be used either as features to feed to subsequent models, such as collaborative filtering, or to build similarity metrics between songs, or to classify music based on the labels for training such as genre, mood, sentiment, etc.

1. A method of estimating song features, the method comprising: an audio receiver receiving a first training audio file; generating, with one or more processors, a first waveform associated with the first training audio file; generating, with the one or more processors, one or more frequency transformations from the first waveform; generating, with the one or more processors, a hyper-image from the one or more frequency transformations; processing, with a convolutional neural network, the hyper-image; estimating, with the one or more processors, an error in an output of the convolutional neural network; optimizing, with the one or more processors, one or more weights associated with the convolutional neural network based on the estimated error; and using the convolutional neural network to estimate a feature of a testing audio file.

2. The method of claim 1, wherein the one or more frequency transformations include one or more of a linear-frequency power spectrum, a log-frequency power spectrum, a constant-Q power spectrum, a ...
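
A minimal sketch of assembling such a hyper-image, assuming librosa is available; a synthetic tone stands in for a real recording, and the channel choices follow the transform types listed in claim 2:

    import numpy as np
    import librosa  # assumed available for the frequency transformations

    sr = 22050
    t = np.arange(3 * sr) / sr
    y = (np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

    linear = np.abs(librosa.stft(y, n_fft=2048)) ** 2             # linear-frequency power
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)  # mel power spectrum
    log_mel = librosa.power_to_db(mel)                            # log-frequency view

    # Stack the transforms as channels of one multi-channel "hyper-image",
    # cropped to a common (bins x frames) shape for a convolutional network.
    hyper_image = np.stack([linear[:128, :128], mel[:, :128], log_mel[:, :128]])
    print(hyper_image.shape)  # (3, 128, 128)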

12-09-2019 publication date

METHOD OF AND SYSTEM FOR SPOTTING DIGITAL MEDIA OBJECTS AND EVENT MARKERS USING MUSICAL EXPERIENCE DESCRIPTORS TO CHARACTERIZE DIGITAL MUSIC TO BE AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION ENGINE

Number: US20190279606A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system and process for scoring a selected media object or event marker, with one or more pieces of digital music, by spotting the selected media object or event marker with musical experience descriptors selected and applied to the selected media object or event marker by the system user during a scoring process, and using said selected musical experience descriptors to drive an automated music composition and generation engine to automatically compose and generate the one or more pieces of digital music.

1.-23. (canceled)

24. An automated music composition and generation system for spotting a digital media object or event marker with one or more musical experience descriptors during a scoring process using musical experience descriptors to characterize one or more pieces of digital music to be automatically composed and generated by an automated music composition and generation engine, for use in scoring said digital media object or event marker, said automated music composition and generation system comprising: an automated music composition and generation engine configured for receiving, as inputs, musical experience descriptors being selected by a system user while spotting a digital media object or event marker during a scoring process, and producing, as output, one or more pieces of digital music automatically composed and generated by said automated music composition and generation engine based on said selected musical experience descriptors supplied to said automated music composition and generation engine; and a system user interface, operably connected to said automated music composition and generation engine, and configured for (i) selecting a digital media object or event marker to be scored by the system user with pieces of digital music automatically composed and generated by said automated music composition and generation engine, (ii) selecting said musical experience descriptors from a group consisting of ...

11-10-2018 publication date

AUDIO INFORMATION PROCESSING METHOD AND APPARATUS

Number: US20180293969A1
Author: ZHAO Weifeng

An audio information processing method and apparatus are provided. The method includes decoding a first audio file to acquire a first audio subfile corresponding to a first sound channel and a second audio subfile corresponding to a second sound channel; extracting first audio data from the first audio subfile; extracting second audio data from the second audio subfile; acquiring a first audio energy value of the first audio data; acquiring a second audio energy value of the second audio data; and determining an attribute of at least one of the first sound channel and the second sound channel based on the first audio energy value and the second audio energy value.

1.-20. (canceled)

21. A method comprising: decoding a first audio file to acquire a first audio subfile corresponding to a first sound channel and a second audio subfile corresponding to a second sound channel; extracting first audio data from the first audio subfile; extracting second audio data from the second audio subfile; acquiring a first audio energy value of the first audio data; acquiring a second audio energy value of the second audio data; and determining an attribute of at least one of the first sound channel and the second sound channel based on the first audio energy value and the second audio energy value.

22. The method according to claim 21, further comprising: extracting frequency spectrum features of a plurality of second audio files, respectively; and training the frequency spectrum features by using an error back propagation (BP) algorithm to obtain a deep neural networks (DNN) model, wherein the first audio data is extracted from the first audio subfile by using the DNN model, and wherein the second audio data is extracted from the second audio subfile by using the DNN model.

23. The method according to claim 21, wherein the determining the attribute includes: determining a difference value between the first audio energy value and the second audio energy value; determining the attribute of the first ...
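
A minimal sketch of the energy comparison (stand-in random signals; a real system would decode the two channels from the audio file):

    import numpy as np

    def audio_energy(x):
        return float(np.mean(x.astype(np.float64) ** 2))  # mean squared amplitude

    left = np.random.randn(44100) * 0.5    # stand-in: channel carrying vocals
    right = np.random.randn(44100) * 0.2   # stand-in: accompaniment channel

    diff = audio_energy(left) - audio_energy(right)
    attribute = "original" if diff > 0 else "accompaniment"  # assumed labeling rule
    print(round(diff, 4), attribute)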

17-09-2020 publication date

Voice synthesis method, voice synthesis apparatus, and recording medium

Number: US20200294486A1
Assignee: Yamaha Corp

A voice synthesis method includes: supplying a first trained model with control data including phonetic identifier data to generate a series of frequency spectra of harmonic components; supplying a second trained model with the control data to generate a waveform signal representative of non-harmonic components; and generating a voice signal including the harmonic components and the non-harmonic components based on the series of frequency spectra of the harmonic components generated by the first trained model and the waveform signal representative of the non-harmonic components generated by the second trained model.
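
A schematic of the final combination step (a trivial sinusoidal model stands in for the first trained model's harmonic spectra, and plain white noise for the second model's non-harmonic waveform; both are assumptions for illustration):

    import numpy as np

    sr, n = 22050, 22050
    t = np.arange(n) / sr

    harmonic = sum(a * np.sin(2 * np.pi * f * t)           # rendered from a series
                   for f, a in [(220, 0.5), (440, 0.3)])   # of harmonic spectra
    nonharmonic = 0.05 * np.random.randn(n)                # breath/noise waveform
    voice = harmonic + nonharmonic                         # combined voice signal
    print(voice[:5])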

03-10-2019 publication date

AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM EMPLOYING AN INSTRUMENT SELECTOR FOR AUTOMATICALLY SELECTING VIRTUAL INSTRUMENTS FROM A LIBRARY OF VIRTUAL INSTRUMENTS TO PERFORM THE NOTES OF THE COMPOSED PIECE OF DIGITAL MUSIC

Number: US20190304418A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters provided by a system user. The automated music composition and generation engine includes an instrument subsystem supporting a library of virtual instruments, wherein each virtual instrument is capable of performing one or more notes of at least a portion of the composed piece of music, in response to the emotion-type and/or style-type musical experience descriptors; an instrument selector subsystem for automatically selecting one or more of virtual instruments from the library, so that each selected virtual instrument performs one or more notes of at least a portion of the composed piece of music; and a digital piece creation subsystem for creating the digital piece of composed music by assembling the notes produced from the virtual instruments selected from the library. 1.: An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters provided by a system user during an automated music composition and generation process , said automated music composition and generation system comprising:a system user interface for enabling system users to provide one or more emotion-type musical experience descriptors, one or more style-type musical experience descriptors, as well as time and/or space parameters, as input to said system user interface; andan automated music composition and generation engine, operably connected to said system user interface, for receiving said emotion-type and style-type musical experience descriptors and time and/or space parameters ...

03-10-2019 publication date

COGNITIVE MUSIC ENGINE USING UNSUPERVISED LEARNING

Number: US20190304419A1
Assignee:

A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector as input in a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net is comprised of a plurality of computing layers, each computing layer composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher level hidden layer in the unsupervised neural net. The output vector is used to create an output musical piece.

1. A method for generating a musical composition, comprising: receiving a first set of musical characteristics from a first input musical piece as an input vector; perturbing the first set of musical characteristics to create a perturbed input vector as input in a first set of nodes in a first visible layer of an unsupervised neural net, the unsupervised neural net comprised of a plurality of computing layers, each computing layer composed of a respective set of nodes; operating the unsupervised neural net to calculate an output vector from a higher level hidden layer in the unsupervised neural net; and using the output vector to create an output musical piece, wherein the output musical piece is a different musical piece than the first input musical piece and the unsupervised neural net operates in an unsupervised manner to calculate the output vector.

2. The method as recited in claim 1, further comprising receiving an expressed user intent, wherein the perturbing is performed by inserting values into a set of perturbation nodes in the first visible layer according to a rule selected according to the expressed user intent.

3. The method as recited in claim 2, wherein in response to an expressed user intent, pitches having an interval from a note in the input piece are ...

Подробнее
16-11-2017 дата публикации

METHOD AND SYSTEM FOR CREATING AN AUDIO COMPOSITION

Номер: US20170330544A1
Принадлежит:

The present disclosure relates to a method for determining a respective stem volume for each respective stem of an audio composition including a plurality of stems. A first value for each of one or more parameters is received for a first segment of the audio composition. For each respective stem of the audio composition, a first respective stem volume is determined for the first segment of the audio composition based on the one or more parameters and a volume control layer.

1. A method for determining a respective stem volume for each respective stem of an audio composition including a plurality of stems, the method including: receiving a first value for each of one or more parameters for a first segment of the audio composition; and, for each respective stem of the audio composition, determining a first respective stem volume for the first segment of the audio composition based on the one or more parameters and a volume control layer.

2. A method as claimed in claim 1, further including: playing each respective stem of the audio composition at the first respective stem volume for the first segment.

3. A method as claimed in claim 1, further comprising: receiving a second value for each of one or more parameters for a first segment of the audio composition; and, for each respective stem, determining a second respective stem volume for the first segment based upon the one or more parameters and the volume control layer.

4. A method as claimed in claim 1, wherein, for each respective stem of the audio composition, determining a first respective stem volume of the audio composition based on the one or more parameters and a volume control layer further includes: filtering the volume control layer corresponding to the respective stem using the one or more parameters to identify a filtered volume control layer; and calculating the first respective stem volume for the respective stem using the one or more parameters and the filtered volume control layer.

5. A ...
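A minimal Python sketch of the volume determination step, assuming the "volume control layer" can be modelled as per-stem breakpoints mapping one parameter to a gain; the stem names, breakpoint values, and linear interpolation are assumptions:

```python
import numpy as np

# Per-stem breakpoints: parameter value (e.g. intensity in [0, 1]) -> gain.
VOLUME_CONTROL_LAYER = {
    "drums":  [(0.0, 0.0), (0.5, 0.6), (1.0, 1.0)],
    "bass":   [(0.0, 0.2), (1.0, 0.9)],
    "melody": [(0.0, 0.8), (1.0, 1.0)],
}

def stem_volume(stem, intensity):
    # "Filter" the control layer down to this stem, then interpolate.
    xs, ys = zip(*VOLUME_CONTROL_LAYER[stem])
    return float(np.interp(intensity, xs, ys))

# Volumes for the first segment of the composition at intensity 0.4:
print({stem: stem_volume(stem, 0.4) for stem in VOLUME_CONTROL_LAYER})
```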

Подробнее
07-11-2019 дата публикации

Utilizing Athletic Activities to Augment Audible Compositions

Номер: US20190339931A1
Принадлежит: Nike Inc

Example embodiments relate to methods and systems for playback of adaptive music corresponding to an athletic activity. A user input is received from a user selecting an existing song for audible playback to the user, the song comprising a plurality of audio layers including at least a first layer, a second layer, and a third layer. Augmented playback of the existing song to the user is initiated by audibly providing the first layer but not the second layer. Physical activity information derived from a sensor corresponding to a real-time physical activity level of a user is received. If the physical activity level of the user is above a first activity level threshold, the augmented playback of the existing song is continued by audibly providing the first layer and the second layer to the user.
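The threshold logic can be sketched in a few lines of Python; the layer names and threshold values are illustrative assumptions:

```python
LAYERS = ["first", "second", "third"]
THRESHOLDS = [0.0, 0.5, 0.8]   # activity level that unlocks each layer

def audible_layers(activity_level):
    # Provide every layer whose threshold the current activity level meets.
    return [layer for layer, threshold in zip(LAYERS, THRESHOLDS)
            if activity_level >= threshold]

print(audible_layers(0.3))   # ['first']
print(audible_layers(0.6))   # ['first', 'second']
```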

Подробнее
26-12-2019 дата публикации

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

Номер: US20190392798A1
Принадлежит: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model.

1. An electronic musical instrument comprising: a plurality of operation elements respectively corresponding to mutually different pitch data; a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data including training lyric data and training pitch data, and on training singing voice data of a singer corresponding to the training musical score data, the trained acoustic model being configured to receive lyric data and pitch data and output acoustic feature data of a singing voice of the singer in response to the received lyric data and pitch data; and at least one processor that, in accordance with a user operation on an operation element in the plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the ...
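A minimal Python sketch of the key-press path, with the trained acoustic model and vocoder replaced by trivial stubs; the function interface and the sine synthesis are assumptions, not Casio's implementation:

```python
import numpy as np

SR = 22050   # output sample rate

def acoustic_model(lyric, midi_pitch):
    # Stub: a trained model would infer acoustic features (spectrum, F0,
    # duration) of the singer's voice for this (lyric, pitch) pair.
    f0 = 440.0 * 2 ** ((midi_pitch - 69) / 12)
    return {"f0": f0, "duration_s": 0.5}

def synthesize(features):
    # Stand-in "vocoder": a plain sine at the inferred F0.
    t = np.arange(int(SR * features["duration_s"])) / SR
    return 0.3 * np.sin(2 * np.pi * features["f0"] * t)

# Key press for C4 (MIDI 60) while the current lyric syllable is "la":
waveform = synthesize(acoustic_model("la", 60))
print(waveform.shape, waveform.dtype)
```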

Подробнее
26-12-2019 дата публикации

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

Номер: US20190392799A1
Принадлежит: CASIO COMPUTER CO., LTD.

An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.

1. An electronic musical instrument comprising: a plurality of operation elements respectively corresponding to mutually different pitch data; a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data including training lyric data and training pitch data, and on training singing voice data of a singer corresponding to the training musical score data, the trained acoustic model being configured to receive lyric data and pitch data and output acoustic feature data of a singing voice of the singer in response to the received lyric data and pitch data; and at least one processor that, in accordance with a user operation on an operation element in the plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output the acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing ...

Подробнее
31-12-2020 дата публикации

GENERATING AUDIO USING NEURAL NETWORKS

Номер: US20200411032A1
Принадлежит:

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

1. A method for training a neural network system having a plurality of parameters, the method comprising: obtaining a training sequence of audio data that comprises a respective audio sample at each of a plurality of time steps; processing the training sequence of audio data using a convolutional subnetwork of the neural network system comprising one or more audio-processing convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of time steps: receive a current sequence of audio data that comprises the respective audio sample at each time step that precedes the time step in the training sequence of audio data, and process the current sequence of audio data to generate an alternative representation for the time step; and processing the alternative representations using an output layer that is configured to, for each of the plurality of time steps: receive the alternative representation for the time step, and process the alternative representation for the time step to generate an output that defines a score distribution over a plurality of possible audio samples for the time step; and ...
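The autoregressive loop the abstract and claim describe can be sketched as follows, with the convolutional subnetwork and output layer replaced by untrained stubs; only the control flow (sample history in, score distribution out, one sample drawn per step) reflects the described method:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_LEVELS = 256                       # e.g. 8-bit quantisation levels
W_OUT = rng.normal(0, 0.1, (16, NUM_LEVELS))   # untrained output weights

def conv_subnetwork(history):
    # Stub for the dilated causal convolutions: any fixed-size summary
    # of the sample history will do for the sketch.
    h = np.zeros(16)
    n = min(len(history), 16)
    if n:
        h[:n] = history[-n:]
    return h

def output_layer(representation):
    logits = representation @ W_OUT
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # score distribution over samples

samples = []
for _ in range(100):                   # generate 100 audio samples
    probs = output_layer(conv_subnetwork(np.asarray(samples)))
    samples.append(rng.choice(NUM_LEVELS, p=probs) / NUM_LEVELS)
print(len(samples), min(samples), max(samples))
```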

Подробнее
17-11-2022 дата публикации

METHOD AND ELECTRONIC DEVICE FOR RECOGNIZING SONG, AND STORAGE MEDIUM

Номер: US20220366880A1
Автор: KONG Lingcheng
Принадлежит:

A method for recognizing a song, including: acquiring a target song segment and transforming the target song segment to generate a corresponding first spectrum map; generating a multi-dimensional first feature vector according to the first spectrum map and a preset neural network model; acquiring second feature vectors of pre-stored songs, wherein one pre-stored song is divided into a plurality of pre-stored song segments, one pre-stored song segment corresponds to one second feature vector, and the first feature vector and the second feature vectors have the same number of dimensions; calculating similarities between the first feature vector and the second feature vectors, and determining a maximum similarity; and determining that the target song segment and a pre-stored song corresponding to the maximum similarity are different versions of the same song in response to the maximum similarity being greater than a preset threshold.
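A minimal Python sketch of the matching step: cosine similarity between the query segment's feature vector and the stored per-segment vectors, followed by a threshold test. The random vectors stand in for the neural-network embeddings, and the threshold value is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for per-segment embeddings of the pre-stored songs.
stored = {f"song_{i}_seg_{j}": rng.normal(size=128)
          for i in range(3) for j in range(4)}
query = rng.normal(size=128)   # first feature vector of the target segment

best_id, best_sim = max(((k, cosine(query, v)) for k, v in stored.items()),
                        key=lambda kv: kv[1])
THRESHOLD = 0.8                # illustrative preset threshold
if best_sim > THRESHOLD:
    print("same song (different version) as", best_id)
else:
    print("no match; closest was", best_id, round(best_sim, 3))
```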

Подробнее
22-12-2022 дата публикации

Information processing apparatus, information processing method, and information processing program

Номер: US20220406280A1
Автор: Haruhiko Kishi
Принадлежит: Sony Group Corp

An information processing apparatus according to the present disclosure includes: a storage unit that stores a plurality of pieces of music feature information in which a plurality of types of feature amounts extracted from music information is associated with predetermined identification information, the music feature information being used as learning data in composition processing using machine learning; a reception unit that receives instruction information transmitted from a terminal apparatus; an extraction unit that extracts the music feature information from the storage unit according to the instruction information; and an output unit that outputs presentation information of the music feature information extracted by the extraction unit.

Подробнее
29-12-2022 дата публикации

Mobile app riteTune to provide music instrument players instant feedback on note pitch and rhythm accuracy based on sheet music

Номер: US20220415289A1
Автор: Daniel Cheng, Steve Cheng
Принадлежит: Individual

A tool is needed for music instrument learners to get feedback on the correctness of their performances of a particular piece of music. The invention disclosed here is such a tool: it provides music instrument players instant feedback on note pitch and rhythm accuracy based on sheet music. This is accomplished through audio signal processing, sheet music image processing, and conversion of both analogue images and audio signals into a standard digital music representation, so that a comparison can be made and feedback presented to the player. An advanced feature allows users to save the data to the cloud and retrieve it later to compare progress. It also allows users to participate in online competitions with other players of the same piece of music.
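The comparison step can be pictured with a minimal Python sketch that aligns performed notes against sheet-music notes and flags pitch and timing errors; the (note, onset-in-seconds) representation and the tolerance value are assumptions:

```python
def feedback(sheet, performed, timing_tol=0.1):
    # Compare performed notes index-by-index against the sheet music.
    report = []
    for (s_note, s_on), (p_note, p_on) in zip(sheet, performed):
        report.append({
            "expected": s_note,
            "pitch_ok": s_note == p_note,
            "timing_ok": abs(s_on - p_on) <= timing_tol,
        })
    return report

sheet = [("C4", 0.0), ("E4", 0.5), ("G4", 1.0)]
performed = [("C4", 0.02), ("F4", 0.55), ("G4", 1.3)]
print(feedback(sheet, performed))   # wrong pitch on note 2, late note 3
```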

Подробнее
26-10-2021 дата публикации

Timbre creation system

Номер: US11158297B2
Принадлежит: International Business Machines Corp

A timbre creation method, system, and computer program product include performing a timbre analysis of a sound from an input source to generate a digital fingerprint of the sound, performing deep learning to create a patch that matches the digital fingerprint, and generating a second patch for a synthesizer which reproduces a timbre that complements the digital fingerprint based on the patch.

Подробнее
18-08-2017 дата публикации

Method for tracking a musical score and associated modeling method

Номер: CN107077836A
Принадлежит: Makemusic Co

The present invention relates to a method for tracking a musical score (10), comprising the following steps performed in real time: recording (23) at least one sound (12) emitted by a performer; estimating (24) at least one chromatic vector (Vx); comparing (26) the chromatic vector (Vx) with theoretical chromatic vectors of the musical score (10); comparing (27) the transition (Tx) between the chromatic vector (Vx) and the preceding chromatic vector (Vx-1) with theoretical transitions of the musical score (10); and estimating (28) the performer's playing position based on the comparison (26) of the chromatic vectors, the comparison (27) of the transitions (Tx), and the preceding playing position (Px-1); the recording step (23) being performed over a duration (Di) adapted according to the ratio between the duration (Dx) of the transition (Tx) and a reference duration (Dref).

Подробнее
15-07-2020 дата публикации

Speech synthesis method, speech synthesis system and program

Номер: JP6724932B2
Автор: 竜之介 大道
Принадлежит: Yamaha Corp

Подробнее
27-11-2007 дата публикации

Content identifiers triggering corresponding responses through collaborative processing

Номер: US7302574B2
Принадлежит: Digimarc Corp

Fingerprint data derived from audio or other content is used as an identifier. The fingerprint data can be derived from the content. In one embodiment, fingerprint data supplied from two or more sources is aggregated. The aggregated fingerprint data is used to define a set of audio signals. An audio signal from the set of audio signals is selected based on its probability of matching the fingerprint data. Digital watermarks can also be similarly used to define a set of audio signals.

Подробнее
21-12-2018 дата публикации

Music recommendation method, apparatus, terminal device, and medium

Номер: CN109063163A
Автор: 叶浩, 李岩, 王汉杰, 陈波
Принадлежит: Tencent Technology Shenzhen Co Ltd

The present application, which belongs to the field of computer technology, discloses a music recommendation method, apparatus, terminal device, and medium. The method includes: determining the visual semantic labels of the material to be scored; searching for matching music whose labels match the visual semantic labels; ranking the matched music according to each user's appreciation information for it; and recommending the matched music to the user according to the ranking result. In this way, the reason for a recommendation can be explained to the user through the visual semantic labels, and different users receive differentiated recommendations, realizing a personalized music recommendation service.
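A minimal Python sketch of the label-match-and-rank flow described above; the catalog structure, label sets, and appreciation scores are illustrative assumptions:

```python
# Music tagged with visual-semantic labels, plus a per-user appreciation score.
CATALOG = {
    "track_a": {"labels": {"beach", "sunset"}, "appreciation": 0.9},
    "track_b": {"labels": {"city", "night"},   "appreciation": 0.7},
    "track_c": {"labels": {"beach", "party"},  "appreciation": 0.4},
}

def recommend(material_labels):
    # Keep tracks whose labels intersect the material's labels,
    # then order by the user's appreciation information.
    matches = [(t, m) for t, m in CATALOG.items()
               if m["labels"] & material_labels]
    return [t for t, m in sorted(matches,
                                 key=lambda tm: tm[1]["appreciation"],
                                 reverse=True)]

print(recommend({"beach"}))   # ['track_a', 'track_c']
```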

Подробнее
28-02-2020 дата публикации

Sound effect configuration method and device and computer readable storage medium

Номер: CN110853606A
Автор: 顾正明

The present application provides a sound effect configuration method, apparatus, and computer-readable storage medium. First, audio feature information of a target audio file is extracted; then the music style of the target audio file is determined based on the audio feature information; finally, the playback sound effect of the target audio file is configured according to the music style. By implementing the solution of the present application, audio is classified based on audio features and the corresponding sound effect is applied dynamically according to the classification result, which effectively reduces the complexity of sound effect configuration, improves the efficiency and adaptability of sound effect configuration, and enhances the user's listening experience.

Подробнее
26-08-2021 дата публикации

Patent JPWO2021166745A1

Номер: JPWO2021166745A1
Автор: [UNK]
Принадлежит: [UNK]

Подробнее
08-05-2018 дата публикации

Method and recording medium for automatic composition using artificial neural network

Номер: KR101854706B1
Автор: 정성훈
Принадлежит: 한성대학교 산학협력단

The present invention relates to an automatic music composition method using an artificial neural network, and a recording medium therefor, that can automatically generate music with a new melody different from the training music and, in particular, can output music of natural, near-composer-level musical quality by processing it to conform to musicality and music theory. According to the present invention, the automatic music composition method using an artificial neural network comprises: a step of generating time-series data by converting a plurality of notes and beats constituting the music to be learned by the artificial neural network (hereinafter referred to as the first learning music) into numeric form; a step of training the artificial neural network using the time-series data; a step of outputting new music from the artificial neural network; and a step of post-processing beats so that, if the new music contains a bar whose beat count exceeds that of a complete bar (hereinafter referred to as an exceeding bar), the exceeding bar is corrected into a complete bar.
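The two data-handling steps (numeric time-series encoding and exceeding-bar post-processing) can be sketched minimally in Python; the note encoding, the 4/4 bar length, and truncation as the correction rule are assumptions:

```python
NOTE_CODES = {"C": 1, "D": 2, "E": 3, "F": 4, "G": 5, "A": 6, "B": 7}
BEATS_PER_BAR = 4.0

def to_time_series(melody):
    # melody: list of (note_name, duration_in_beats) -> numeric pairs.
    return [(NOTE_CODES[n], d) for n, d in melody]

def fix_exceeding_bars(series):
    # Truncate any note that would push a bar past its full beat count.
    fixed, used = [], 0.0
    for code, dur in series:
        room = BEATS_PER_BAR - used
        dur = min(dur, room)
        fixed.append((code, dur))
        used = (used + dur) % BEATS_PER_BAR
    return fixed

series = to_time_series([("C", 2.0), ("E", 1.5), ("G", 1.5)])  # 5 beats
print(fix_exceeding_bars(series))   # last note trimmed to close the bar
```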

Подробнее
28-01-1997 дата публикации

Pitch and playing position detection device and method of stringed instrument

Номер: KR970002841A
Автор:
Принадлежит:

Подробнее
18-05-2022 дата публикации

Chord identification method, chord identification device, and program

Номер: JP7069819B2
Автор: 康平 須見
Принадлежит: Yamaha Corp

Подробнее
19-01-2022 дата публикации

Generating audio using neural networks

Номер: KR102353284B1

Methods, systems and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence of audio data comprising a respective audio sample at each of a plurality of time steps, are disclosed. One of the methods comprises, for each of the plurality of time steps: providing as input to a convolutional subnetwork a current sequence of audio data, the current sequence comprising the respective audio sample at each time step preceding the time step in the output sequence, wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as an input to an output layer, wherein the output layer is configured to process the alternative representation to produce an output defining a score distribution over a plurality of possible audio samples for the time step.

Подробнее
11-12-2015 дата публикации

METHOD FOR TRACKING A MUSICAL SCORE AND ASSOCIATED MODELING METHOD

Номер: FR3022051A1
Принадлежит: Weezic

The present invention relates to a method for tracking a musical score (10) comprising the following steps performed in real time: recording (23) at least one sound (12) emitted by a performer; estimating (24) at least one chromatic vector (Vx); comparing (26) said chromatic vector (Vx) with theoretical chromatic vectors of said musical score (10); comparing (27) a transition (Tx) between said chromatic vector (Vx) and a previous chromatic vector (Vx-1) with theoretical transitions of said musical score (10); and estimating (28) a playing position (Px) of the performer as a function of a previous playing position (Px-1), of the comparison (26) of said chromatic vector (Vx), and of the comparison (27) of said transition (Tx), the recording step (23) being performed over a duration (Di) adapted as a function of the ratio between a duration (Dx) of said transition (Tx) and a reference duration (Dref).
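A minimal Python sketch of the comparison and position-estimation steps; the Euclidean distance, the small search window around the previous position, and the synthetic vectors are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

theoretical = [rng.random(12) for _ in range(50)]   # one chroma vector per score event

def estimate_position(observed, prev_pos, window=2):
    # Search a small window around the previous playing position and keep
    # the score event whose theoretical chroma vector is closest.
    lo = max(0, prev_pos - 1)
    hi = min(len(theoretical), prev_pos + window + 1)
    return min(range(lo, hi),
               key=lambda p: float(np.linalg.norm(observed - theoretical[p])))

obs = theoretical[7] + rng.normal(0, 0.05, 12)      # noisy performed vector
print(estimate_position(obs, prev_pos=6))           # expected: 7
```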

Подробнее
04-01-2022 дата публикации

Driving sound library, apparatus for generating driving sound library and vehicle comprising driving sound library

Номер: KR20220000655A
Автор: 장경진

A driving sound library capable of satisfying user demands by providing driving sounds classified into various themes is generated by a method comprising: analyzing the frequency characteristics and time characteristics of each of a plurality of source sound sources classified into a plurality of categories; determining a chord corresponding to each of the source sound sources based on the frequency and time characteristics; generating a plurality of modulated sound sources by modulating each source sound source with its corresponding chord; generating a plurality of driving sound sources using the source sound sources and the modulated sound sources as input data; changing the pitch of the driving sound sources based on a plurality of engine sound orders corresponding to engine RPM in a preset range, thereby generating a plurality of driving sounds; receiving theme-specific scores for the driving sounds; and storing the driving sounds in association with their respective themes.

Подробнее
04-06-2021 дата публикации

Method and system for identifying notes played on a wind musical instrument

Номер: FR3103952A1

The invention provides a method and a device for identifying the configuration in which the side holes of a wind musical instrument are covered. It is based on a learning phase and a classification phase using supervised learning algorithms or predictive discriminant analysis algorithms. The invention makes it possible to adapt the execution time of the method to obtain a response time compatible with real-time requirements. It also solves the problem of drift over time when the initial learning of the notes is altered by the influence of parameters such as temperature, humidity, or the handling of the instrument. Figure 3
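A minimal Python sketch of the learning and classification phases, using scikit-learn's LinearDiscriminantAnalysis as the predictive discriminant analysis step; the synthetic per-configuration "spectral features" are stand-ins for real recordings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
N_CONFIGS, SAMPLES, DIM = 8, 30, 20   # 8 hole-covering configurations

# Learning phase: features recorded for each known configuration.
X = np.vstack([rng.normal(c, 1.0, (SAMPLES, DIM)) for c in range(N_CONFIGS)])
y = np.repeat(np.arange(N_CONFIGS), SAMPLES)
clf = LinearDiscriminantAnalysis().fit(X, y)

# Classification phase: identify the configuration behind a new sound.
new_sound = rng.normal(3, 1.0, (1, DIM))
print("fingering configuration:", clf.predict(new_sound)[0])   # likely 3
```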

Подробнее
28-04-2023 дата публикации

Method and system for identifying notes played on a wind musical instrument

Номер: FR3103952B1

The invention provides a method and a device for identifying the configuration in which the side holes of a wind musical instrument are covered. It is based on a learning phase and a classification phase using supervised learning algorithms or predictive discriminant analysis algorithms. The invention makes it possible to adapt the execution time of the method to obtain a response time compatible with real-time requirements. It also solves the problem of drift over time when the initial learning of the notes is altered by the influence of parameters such as temperature, humidity, or the handling of the instrument. Figure 3

Подробнее
12-02-2020 дата публикации

Automatic isolation of multiple instruments from musical mixtures

Номер: EP3608902A1
Принадлежит: SPOTIFY AB

A method for estimating a component of a provided audio signal, comprising converting the provided audio signal to an image, inputting the image to a U-Net trained to estimate different types of content, the different types of content including one or more of vocal content and musical instrument content, wherein, in response to the input image, the U-Net outputs signals, each representing a corresponding one of the different types of content, and converting each of the signals output by the U-Net to an audio signal.
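A minimal Python sketch of the surrounding pipeline, with the trained U-Net replaced by a stub that returns one soft mask per content type; the STFT/mask/inverse-STFT plumbing is standard, while the stub masks and the toy mix are assumptions:

```python
import numpy as np
import librosa

sr = 22050
t = np.arange(sr * 2) / sr                      # two seconds of toy "mix"
mix = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)

spec = librosa.stft(mix)                        # the spectrogram "image"
mag, phase = np.abs(spec), np.angle(spec)

def unet_stub(magnitude):
    # A trained U-Net would output one soft mask per estimated source;
    # two complementary masks illustrate the shape contract only.
    m = 1.0 / (1.0 + np.exp(-(magnitude - magnitude.mean())))
    return {"vocals": m, "accompaniment": 1.0 - m}

for source, mask in unet_stub(mag).items():
    estimate = librosa.istft(mag * mask * np.exp(1j * phase))
    print(source, estimate.shape)               # one audio signal per source
```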

Подробнее
04-01-2019 дата публикации

Deep-learning-based tonal range balancing method, apparatus, and system

Номер: CN109147807A
Автор: 卢峰, 喻浩文, 姚青山, 秦宇
Принадлежит: Anker Innovations Co Ltd

The present invention provides a deep-learning-based tonal range balancing method, apparatus, and system. The method includes: performing feature extraction on audio data to obtain audio data features; and, based on the audio data features, using a trained tonal range balancing model to generate a recommended tonal range balancing result for the audio data to be processed. Based on deep neural networks and unsupervised deep learning, the present invention solves the problem of tonal range balancing for music without classification labels and music of unknown style and, combined with statistics on user preferences, achieves a more reasonable multi-category tonal range balance design that satisfies personalized needs.

Подробнее
20-03-2003 дата публикации

Using a system for prediction of musical preferences for the distribution of musical content over cellular networks

Номер: US20030055516A1
Автор: Dan Gang, Daniel Lehmann
Принадлежит: Individual

A system and a method for predicting the musical taste and/or preferences of a user, and its integration into services provided by a wireless network provider. Although the present application is directed toward implementations with wireless providers, the present invention can also be implemented on a regular, i.e., wired, network. The core of the present invention is a system capable of predicting whether a given user, i.e., customer, likes or does not like a specific song from a pre-analyzed catalog. Once such a prediction has been performed, the items predicted to be liked best by the user may be forwarded to the user's mobile device on the cellular (or other wireless) network. The system maintains a database containing proprietary information about the songs in the catalog and, most importantly, a description (profile) of the musical taste of each of its customers, identified by their cellular telephone number.

Подробнее
12-07-2022 дата публикации

Generating audio using neural networks

Номер: US11386914B2
Принадлежит: DeepMind Technologies Ltd

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.

Подробнее
15-06-2021 дата публикации

Autonomous music composition and performance system employing real-time analysis of a musical performance to automatically compose and perform music to accompany the musical performance

Номер: US11037539B2
Автор: Andrew H. Silverstein
Принадлежит: Shutterstock Inc

An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of a real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes musical signals from the set of real or synthetic musical instruments, composes and generates music in real-time that augments the music being played by the band of musicians, and/or composes and generates music for subsequent playback, review and consideration by the human musicians.

Подробнее
22-01-2019 дата публикации

Audio beat detection method, device and storage medium

Номер: CN109256147A
Автор: 王征韬

The present invention discloses an audio beat detection method, apparatus, and storage medium. The method includes: acquiring training samples and performing feature extraction on them to obtain the audio features of the training samples; inputting the audio features of the training samples into a reference model for training, so as to obtain optimized parameters of the trained reference model; generating a detection model from the optimized parameters; and performing audio beat detection on the audio under test based on the detection model, so as to obtain the BPM value of the audio under test and the confidence corresponding to that BPM value. This improves the accuracy of audio beat detection and shortens the running time of the detection process.
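For contrast with the trained detection model the abstract describes, here is a minimal Python sketch that estimates a BPM value and a confidence from onset-envelope autocorrelation; the toy envelope and the confidence heuristic are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
SR_ENV = 100                          # onset envelope sampled at 100 Hz

env = np.zeros(SR_ENV * 10)           # ten seconds of toy onset envelope
env[::SR_ENV // 2] = 1.0              # an onset every 0.5 s -> 120 BPM
env += 0.05 * rng.random(env.size)    # light noise

ac = np.correlate(env, env, mode="full")[env.size - 1:]   # non-negative lags
lags = np.arange(30, 120)             # 0.3-1.2 s periods -> 50-200 BPM
best = int(lags[np.argmax(ac[lags])])
bpm = 60.0 * SR_ENV / best
confidence = float(ac[best] / ac[0])  # peak strength relative to lag zero
print(round(bpm, 1), round(confidence, 2))
```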

Подробнее
30-08-2022 дата публикации

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system

Номер: US11430419B2
Автор: Andrew H. Silverstein
Принадлежит: Shutterstock Inc

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting musical tastes and preferences of each system user; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.

Подробнее
21-09-2022 дата публикации

Creating music content

Номер: KR20220128672A
Принадлежит: Aimi Inc.

The present invention relates to techniques for automatically generating new music content based on image representations of audio files. The present invention also relates to executing audio techniques for real-time audio generation. The present invention also relates to executing user-generated controls for modifying music content. The present invention also relates to tracking contributions to composed music content.

Подробнее
23-11-2021 дата публикации

Musical composition file generation and management system

Номер: US11183160B1
Принадлежит: Wonder Inventions LLC

A system and method to identify a digital representation of a first musical composition including a set of musical blocks. A set of parameters associated with source content are identified. In accordance with one or more rules, one or more of the set of musical blocks of the first musical composition are modified based on the set of parameters to generate a derivative musical composition. An audio file including the derivative musical composition is generated.
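A minimal Python sketch of rule-driven block modification: a composition is a list of blocks, source-content parameters select rules, and each rule rewrites matching blocks to yield the derivative composition. The block fields, parameters, and the two rules are illustrative assumptions:

```python
composition = [
    {"name": "intro",  "bars": 4, "tempo": 100},
    {"name": "verse",  "bars": 8, "tempo": 100},
    {"name": "outro",  "bars": 4, "tempo": 100},
]

def derive(blocks, params):
    rules = []
    if params.get("target_seconds", 0) < 30:    # short source: tighten form
        rules.append(lambda b: None if b["name"] == "outro" else b)
    if params.get("mood") == "energetic":       # faster derivative
        rules.append(lambda b: {**b, "tempo": b["tempo"] + 20})
    out = blocks
    for rule in rules:                           # apply each selected rule
        out = [r for r in (rule(b) for b in out) if r is not None]
    return out

print(derive(composition, {"target_seconds": 20, "mood": "energetic"}))
```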

Подробнее
01-01-2020 дата публикации

Electronic musical instrument, electronic musical instrument control method, and storage medium

Номер: EP3588484A1
Принадлежит: Casio Computer Co Ltd

An electronic musical instrument includes: a memory (202) that stores a trained acoustic model (306) obtained by performing machine learning (305) on training musical score data (311) and training singing voice data of a singer; and at least one processor (205), wherein the at least one processor (205): in accordance with a user operation on an operation element in a plurality of operation elements (101), inputs prescribed lyric data (215a) and pitch data (215b) corresponding to the user operation of the operation element to the trained acoustic model (306) so as to cause the trained acoustic model (306) to output the acoustic feature data (317) in response to the inputted prescribed lyric data (215a) and the inputted pitch data (215b), and digitally synthesizes and outputs inferred singing voice data (217) that infers a singing voice of the singer on the basis of the acoustic feature data (317) output by the trained acoustic model (306).

Подробнее
04-04-2023 дата публикации

Arrangement generation method, arrangement generation device, and generation program

Номер: JP7251684B2
Автор: 正博 鈴木
Принадлежит: Yamaha Corp

Подробнее