Total found: 5253. Displayed: 100.

12-01-2012 publication date

Apparatus and method for playing musical instrument using augmented reality technique in mobile terminal

Number: US20120007884A1
Author: Ki-Yeung Kim
Assignee: SAMSUNG ELECTRONICS CO LTD

An apparatus and a method related to an application of a mobile terminal using an augmented reality technique capture an image of a musical instrument directly drawn/sketched by a user to recognize the particular relevant musical instrument, and provide an effect of playing the musical instrument on the recognized image as if a real instrument were being played. The apparatus preferably includes an image recognizer and a sound source processor. The image recognizer recognizes a musical instrument on an image through a camera. The sound source processor outputs the recognized musical instrument on the image on a display unit to use the same for a play, and matches the musical instrument play on the image to a musical instrument play output on the display unit.

19-01-2012 publication date

Method and device for audio signal classification

Number: US20120016677A1
Assignee: Huawei Technologies Co Ltd

The present invention discloses a method and a device for audio signal classification, and relates to the field of communications technologies, which solve a problem of high complexity of type classification of audio signals in the prior art. In the present invention, after an audio signal to be classified is received, a tonal characteristic parameter of the audio signal to be classified, where the tonal characteristic parameter of the audio signal to be classified is in at least one sub-band, is obtained, and a type of the audio signal to be classified is determined according to the obtained characteristic parameter. The present invention is mainly applied to an audio signal classification scenario, and implements audio signal classification through a relatively simple method.
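
As a rough illustration of classifying audio by a per-sub-band tonal characteristic, the sketch below uses spectral flatness as the tonality measure. The band edges, window, threshold, and function names are illustrative assumptions, not the parameters of the patent.

```python
import numpy as np

def subband_tonality(frame, sr, bands=((0, 500), (500, 2000), (2000, 8000))):
    """Spectral flatness per sub-band: values near 1 = noise-like, near 0 = tonal."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2 + 1e-12
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    flatness = []
    for lo, hi in bands:
        band = spec[(freqs >= lo) & (freqs < hi)]
        flatness.append(np.exp(np.mean(np.log(band))) / np.mean(band))
    return np.array(flatness)

def classify(frame, sr, tonal_threshold=0.7):
    """Very rough decision rule: call the frame music-like if any sub-band is strongly tonal."""
    tonality = 1.0 - subband_tonality(frame, sr)   # higher = more tonal
    return "music-like" if tonality.max() > tonal_threshold else "speech/noise-like"

# Example: a pure tone looks tonal in its sub-band, white noise does not.
sr = 16000
t = np.arange(2048) / sr
print(classify(np.sin(2 * np.pi * 440 * t), sr))   # music-like
print(classify(np.random.randn(2048), sr))         # speech/noise-like
```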

15-03-2012 publication date

Device and method for rhythm training

Number: US20120060666A1
Author: Andy Shoniker
Assignee: Individual

A programmable rhythm trainer configured to operate on a general purpose computing device including a handheld computing device or a mobile communication device. According to an embodiment, the programmable rhythm trainer comprises a component configured to generate a mix or chain comprising one or more bars, each of the bars comprising one or more note arrangements. According to an embodiment, the programmable rhythm trainer comprises a component configured to rearrange one or more of the bars in the chain and save the rearranged chain to memory. According to an embodiment, the programmable rhythm trainer comprises a component configured to rearrange one or more of the note arrangements belonging to one of the bars. According to an embodiment, the programmable rhythm trainer comprises a component configured to set a beats-per-minute for one or more of the chains in response to a user input. According to an embodiment, the programmable rhythm trainer comprises a graphical user interface and input for manipulating one or more of the notes, the note arrangements, the bars and/or the chains or mixes. According to an embodiment, the programmable rhythm trainer comprises an application or software program configured to run on a computing device. According to another embodiment, the programmable rhythm trainer comprises a portable or handheld device.

15-03-2012 publication date

Device and method for interpreting musical gestures

Number: US20120062718A1
Author: Dominique David

Musical rendition is provided through the use of microsensors, in particular of accelerometers and magnetometers or rate gyros, and through an appropriate processing of the signals from the microsensors. In particular, the processing uses a merging of the data output from the microsensors to eliminate false alarms in the form of movements of the user unrelated to the music. The velocity of the musical strikes is also measured. Embodiments make it possible to control the running of mp3 or wav type music files to be played back.

22-03-2012 publication date

Polyphonic tuner

Number: US20120067192A1
Assignee: TC Group AS

The present invention relates to a musical instrument tuner, e.g. a guitar tuner, featuring different levels of detail for displaying monophonic and polyphonic characteristics of an input signal.

29-03-2012 publication date

Sound processing device, sound processing method, information storage medium, and program

Number: US20120077592A1
Author: Yuichi Asami
Assignee: Individual

Sound output units (201A, 201B) output sounds, respectively. A detection unit (202) detects existence/absence of depression performed by a player on each of a plurality of operation targets to be operated. A sound volume changing unit (203) changes the volume ratio of a sound to be output from each of the sound output units (201A, 201B) based on an operation target whose depression has been detected among the plurality of operation targets. For example, the sound volume changing unit (203) relatively increases the volume of a sound to be output from a sound output unit among the sound output units (201A, 201B) which is far from the operation target whose depression has been detected.

28-06-2012 publication date

Intervalgram Representation of Audio for Melody Recognition

Number: US20120160078A1
Assignee: David Ross, Lyon Richard F, Walters Thomas C

A system, method, and computer readable storage medium generates an audio fingerprint for an input audio clip that is robust to differences in key, instrumentation, and other performance variations. The audio fingerprint comprises a sequence of intervalgrams that represent a melody in an audio clip according to pitch intervals between different time points in the audio clip. The fingerprint for an input audio clip can be compared to a set of reference fingerprints in a reference database to determine a matching reference audio clip.
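
The key-invariance argument behind the intervalgram can be illustrated with a much-simplified sketch that describes a melody by its successive pitch intervals. The MIDI-based representation and the distance measure below are assumptions for illustration, not the patent's intervalgram computation.

```python
import numpy as np

def interval_sequence(midi_pitches):
    """Represent a melody by successive pitch intervals (semitones), which is key-invariant."""
    return np.diff(np.asarray(midi_pitches, dtype=float))

def interval_distance(query, reference):
    """Mean absolute difference between two interval sequences, truncated to equal length."""
    n = min(len(query), len(reference))
    return float(np.mean(np.abs(query[:n] - reference[:n])))

# The same melody played in two different keys has identical intervals.
melody_in_c = [60, 62, 64, 65, 67]   # C D E F G
melody_in_g = [67, 69, 71, 72, 74]   # same tune transposed up a fifth
other_tune  = [60, 60, 67, 67, 69]

q = interval_sequence(melody_in_c)
print(interval_distance(q, interval_sequence(melody_in_g)))  # 0.0 -> match
print(interval_distance(q, interval_sequence(other_tune)))   # 2.0 -> no match
```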

28-06-2012 publication date

Interactive guitar game designed for learning to play the guitar

Number: US20120165087A1
Assignee: Individual

An interactive game designed for learning to play a guitar. A guitar may be connected to a computer or other platform, capable of loading music and displaying notes and chords and other feedback and visual learning aids on a display screen, allowing a user to read music and play along. The goal of the software or interactive game engine is for players to learn how to play a guitar. Users may operate the game in a number of modes with different goals, playing mini-games throughout the levels of the game. The game provides feedback and statistics to help users learn how to play the guitar.

23-08-2012 publication date

Complexity Scalable Perceptual Tempo Estimation

Number: US20120215546A1
Assignee: DOLBY INTERNATIONAL AB

The present document relates to methods and systems for estimating the tempo of a media signal, such as audio or combined video/audio signal. In particular, the document relates to the estimation of tempo perceived by human listeners, as well as to methods and systems for tempo estimation at scalable computational complexity. A method and system for extracting tempo information of an audio signal from an encoded bit-stream of the audio signal comprising spectral band replication data is described. The method comprises the steps of determining a payload quantity associated with the amount of spectral band replication data comprised in the encoded bit-stream for a time interval of the audio signal; repeating the determining step for successive time intervals of the encoded bit-stream of the audio signal, thereby determining a sequence of payload quantities; identifying a periodicity in the sequence of payload quantities; and extracting tempo information of the audio signal from the identified periodicity.
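
A minimal sketch of the core step, finding a periodicity in a per-frame sequence of payload quantities and converting it to BPM, assuming an illustrative frame duration and a plain autocorrelation; the patent's actual periodicity analysis may differ.

```python
import numpy as np

def tempo_from_payload(payload_bytes, frame_dur=0.025, bpm_range=(60, 200)):
    """Estimate tempo (BPM) from the periodicity of a per-frame payload-size sequence."""
    x = np.asarray(payload_bytes, dtype=float)
    x = x - x.mean()                                   # remove DC before correlating
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, non-negative lags
    lo = int(60.0 / (bpm_range[1] * frame_dur))        # lag of the fastest allowed tempo
    hi = int(60.0 / (bpm_range[0] * frame_dur))        # lag of the slowest allowed tempo
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 / (lag * frame_dur)

# Synthetic example: payload peaks every 0.5 s (120 BPM) on top of random frame sizes.
frames = np.random.randint(40, 60, size=2000).astype(float)
period = int(round(0.5 / 0.025))
frames[::period] += 80
print(round(tempo_from_payload(frames)))   # 120
```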

20-09-2012 publication date

Music and light synchronization system

Number: US20120234160A1

An apparatus for synchronizing light signals to a music input signal includes an analog to digital converter to generate a digital signal equivalent of the analog music input signal, a digital signal decoder to generate an output pulse width modulated signal that is representative of the tempo and the volume of the music input signal, a light driver unit that receives the output pulse width modulated signal to correspondingly light up a lighting unit, and a lighting unit that emits light. Thus, light brightness synchronizes with the music volume amplitude, and the light blinks on and off with the music tempo and beating.

25-10-2012 publication date

Music game improvements

Number: US20120266738A1
Assignee: STARPLAYIT PTY LTD

The present invention provides a method of processing musical information. A user can perform a musical piece, using a real musical instrument (e.g. a guitar), which is received as audio input. The audio input is then assessed to determine whether it meets various quality standards—for example, whether the user played at the right pitch, at the right time, or at the right volume. If the audio input meets the standards, audio output is then provided of a professional performance of the musical piece, such that it sounds as if the user is performing the professional audio. The standards can be adjusted to different levels, depending on the user's skill on the musical instrument. At an easy level, low standards may be applied, such that even unskilled or beginner musicians can sound like a professional. For more skilled users, the standards may be more difficult to meet. If the user does not meet the quality standards, alternative audio output may be provided, indicating incorrect performance of the musical piece.

31-01-2013 publication date

System and Method for Producing a More Harmonious Musical Accompaniment

Number: US20130025437A1
Assignee: Music Mastermind Inc

A system and process for producing a more harmonious musical accompaniment for a musical compilation, the process comprising determining a plurality of probable key signatures for the musical compilation, creating an interval profiling matrix for each of the probable key signatures, finding products of a major key interval profile matrix with each of the interval profiling matrices, summing each of the major key interval products into a running major key sum, finding a product of a minor key interval profile with each of the interval profiling matrices, summing each of the minor key interval products into a running minor key sum, and selecting the most probable key signature from the plurality of probable key signatures by comparing the minor key sum and the major key sum.
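
For comparison, a standard way to realize the same idea of scoring probable keys against major and minor profiles is a Krumhansl-Schmuckler style estimator. The profile values and the correlation score below are textbook ones, not the interval-profile matrix formulation described in this abstract.

```python
import numpy as np

# Krumhansl-Kessler key profiles (textbook values), index 0 = tonic.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(pitch_class_histogram):
    """Pick the key whose rotated profile correlates best with the pitch-class histogram."""
    hist = np.asarray(pitch_class_histogram, dtype=float)
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            score = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if best is None or score > best[0]:
                best = (score, f"{NAMES[tonic]} {mode}")
    return best[1]

# Pitch-class counts of a passage that uses only the C-major scale.
counts = [10, 0, 6, 0, 7, 5, 0, 9, 0, 5, 0, 4]
print(estimate_key(counts))   # C major
```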

25-04-2013 publication date

Method and Apparatus for Audio Signal Classification

Number: US20130103398A1
Assignee: Nokia Oyj

An apparatus comprising at least one processor and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform determining a signal identification value for an audio signal, determining at least one noise level value for the audio signal, comparing the signal identification value against a signal identification threshold and each of the at least one noise level value against an associated noise level threshold, and identifying the audio signal dependent on the comparison.

30-05-2013 publication date

Character-based automated shot summarization

Number: US20130138435A1
Author: Frank Elmo Weber
Assignee: Individual

Methods, devices, systems and tools are presented that allow the summarization of text, audio, and audiovisual presentations, such as movies, into less lengthy forms. High-content media files are shortened in a manner that preserves important details, by splitting the files into segments, rating the segments, and reassembling preferred segments into a final abridged piece. Summarization of media can be customized by user selection of criteria, and opens new possibilities for delivering entertainment, news, and information in the form of dense, information-rich content that can be viewed by means of broadcast or cable distribution, “on-demand” distribution, internet and cell phone digital video streaming, or can be downloaded onto an iPod™ and other portable video playback devices.

06-06-2013 publication date

Musical fingerprinting

Number: US20130139674A1
Assignee: Echo Nest Corp

A method for fingerprinting an unknown music sample is disclosed. A plurality of known tracks may be segmented into reference samples. A reference fingerprint including a plurality of codes may be generated for each reference sample. An inverted index including, for each possible code value, a list of reference samples having reference fingerprints that contain the respective code value may be generated. An unknown fingerprint including a plurality of codes may be generated from the unknown music sample. A code match histogram may list candidate reference samples and associated scores, each score indicating a number of codes from the unknown fingerprint that match codes in the reference fingerprint. Time difference histograms may be generated for two or more reference samples having the highest scores. A determination may be made whether or not a single reference sample matches the unknown music sample based on a comparison of the time difference histograms.
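
The inverted-index lookup and the code-match histogram can be sketched in a few lines. The time-difference histograms used for the final decision are omitted, and the sample ids and code values are made up.

```python
from collections import Counter, defaultdict

def build_inverted_index(reference_fingerprints):
    """Map each code value to the set of reference sample ids whose fingerprint contains it."""
    index = defaultdict(set)
    for sample_id, codes in reference_fingerprints.items():
        for code in codes:
            index[code].add(sample_id)
    return index

def match(unknown_codes, index):
    """Code-match histogram: count, per candidate, how many query codes it shares."""
    histogram = Counter()
    for code in unknown_codes:
        for sample_id in index.get(code, ()):
            histogram[sample_id] += 1
    return histogram.most_common()

refs = {
    "track_a_seg3": [11, 42, 42, 7, 99, 13],
    "track_b_seg1": [5, 8, 21, 34, 55, 89],
}
index = build_inverted_index(refs)
print(match([42, 7, 99, 3], index))   # track_a_seg3 scores highest
```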

22-08-2013 publication date

Apparatus and method for modifying an audio signal using envelope shaping

Number: US20130216053A1
Author: Sascha Disch

An apparatus for modifying an audio signal has an envelope shape determiner, a filterbank processor, a signal processor, a combiner and an envelope shaper. The envelope shape determiner determines envelope shape coefficients based on a frequency domain audio signal representing a time domain input audio signal, and the filterbank processor generates a plurality of bandpass signals in a subband domain based on the frequency domain audio signal. Further, the signal processor modifies a subband domain bandpass signal of the plurality of subband domain bandpass signals based on a predefined modification target. The combiner combines at least a subset of the plurality of subband domain bandpass signals containing the modified subband domain bandpass signal to obtain a time domain audio signal. Further, the envelope shaper is operative to obtain a shaped audio signal.

22-08-2013 publication date

Musical system for promoting brain health

Number: US20130217495A1

A musical brain fitness system according to an exemplary embodiment of the present invention includes: a display device configured to display a visual signal according to selected music; a control box configured to control the display device by receiving input of a user; and a controller including an acceleration sensor, and configured to drive the acceleration sensor according to a movement of the user to transmit a signal to the control box, in which the control box compares the signal received from the controller with a signal controlling the display device, processes the comparison, and outputs a result of the comparison.

03-10-2013 publication date

Tonal component detection method, tonal component detection apparatus, and program

Number: US20130255473A1
Assignee: Sony Corp

There is provided a tonal component detection method including performing a time-frequency transformation on an input time signal to obtain a time-frequency distribution, detecting a peak in a frequency direction at a time frame of the time-frequency distribution, fitting a tone model in a neighboring region of the detected peak, and obtaining a score indicating tonal component likeness of the detected peak based on a result obtained by the fitting.

21-11-2013 publication date

Music Analysis Apparatus

Number: US20130305904A1
Author: Kouhei SUMI
Assignee: Yamaha Corp

A music analysis apparatus calculates a similarity index based on an edit distance between a designated sequence of notes designated by a user and a reference sequence of notes of a reference piece of music. The edit distance is calculated by setting a substitution cost between a first note in the designated sequence of notes and a second note in the reference sequence of notes to a first value when one of the first note and the second note corresponds to any of a plurality of notes contained in a tolerance interval containing the other of the first note and the second note, and by setting the substitution cost to a second value different from the first value when one of the first note and the second note does not correspond to any of the plurality of notes contained in the tolerance interval.
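
A minimal sketch of an edit distance whose substitution cost depends on whether one note falls inside a tolerance interval around the other. The fixed ±2-semitone tolerance and the cost values are illustrative assumptions, not the apparatus's actual settings.

```python
def note_edit_distance(designated, reference, tolerance=2,
                       low_cost=0.0, high_cost=1.0, indel_cost=1.0):
    """Levenshtein-style distance over note sequences (MIDI numbers) with a
    tolerance-dependent substitution cost."""
    n, m = len(designated), len(reference)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel_cost
    for j in range(1, m + 1):
        d[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            near = abs(designated[i - 1] - reference[j - 1]) <= tolerance
            sub = low_cost if near else high_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,      # delete
                          d[i][j - 1] + indel_cost,      # insert
                          d[i - 1][j - 1] + sub)         # substitute (tolerant)
    return d[n][m]

print(note_edit_distance([60, 62, 64], [60, 63, 64]))  # 0.0: 62 vs 63 is within tolerance
print(note_edit_distance([60, 62, 64], [60, 70, 64]))  # 1.0: 62 vs 70 is not
```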

05-12-2013 publication date

Apparatus and method for high speed visualization of audio stream in an electronic device

Number: US20130325154A1
Assignee: SAMSUNG ELECTRONICS CO LTD

An electronic device and a method for high speed visualization of an audio stream are provided. The method of operating the electronic device includes extracting only header information from at least one frame included in a specific audio file, extracting a global gain value, which is an average volume of respective frames, by using the extracted frame header information, filtering the extracted global gain value, and displaying the filtered value.

19-12-2013 publication date

Systems, methods, apparatus, and computer-readable media for pitch trajectory analysis

Number: US20130339011A1
Assignee: Qualcomm Inc

Systems, methods, and apparatus for pitch trajectory analysis are described. Such techniques may be used to remove vocals and/or vibrato from an audio mixture signal. For example, such a technique may be used to pre-process the signal before an operation to decompose the mixture signal into individual instrument components.

23-01-2014 publication date

Note Sequence Analysis Apparatus

Number: US20140020546A1
Author: Kouhei SUMI
Assignee: Yamaha Corp

A note sequence analysis apparatus calculates a similarity index based on similarity between a designated sequence of notes designated by a user and a reference sequence of notes for each of a plurality of reference pieces of music, then selects a reference piece of music from among the plurality of reference pieces of music based on the similarity index calculated for each of the plurality of reference pieces of music, and specifies an evaluation index of the designated sequence of notes based on the similarity index calculated for the selected reference piece of music.

27-02-2014 publication date

System and method for conforming an audio input to a musical key

Number: US20140053710A1
Assignee: Music Mastermind Inc

A system and method for conforming an audio input to a musical key. Audio input is received and the musical key is determined. Sequentially for each consecutive note, a pitch value for each note and an interval between each preceding and subsequent note is determined; alternative subsequent notes based on the musical key and the pitch value of the subsequent note are determined; each interval between each alternative subsequent note and each respective note of the alternative subsequent notes corresponding to the preceding note is scored based on the interval between the preceding note and the subsequent note of the audio input; and a best interval is selected for each alternative subsequent note. A best-match note is selected for each note of the audio input based on the best intervals of all the notes, and each audio input note is conformed to a frequency of the best-match note.
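
The final conforming step can be illustrated by snapping each detected pitch to the nearest scale degree of the chosen key. The interval-scoring stage described above is omitted; the major-scale and tonic choices below are assumptions for illustration only.

```python
import numpy as np

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # scale degrees in semitones above the tonic

def conform_to_key(freqs_hz, tonic_pc=0):
    """Snap each input frequency to the nearest pitch that belongs to the major key."""
    out = []
    candidates = [12 * octave + tonic_pc + degree
                  for octave in range(11) for degree in MAJOR_SCALE]
    for f in freqs_hz:
        midi = 69 + 12 * np.log2(f / 440.0)            # continuous MIDI pitch
        best = min(candidates, key=lambda m: abs(m - midi))
        out.append(440.0 * 2 ** ((best - 69) / 12.0))  # back to Hz
    return out

# A slightly sharp C#4 (about 281 Hz) gets pulled to D4 in C major.
print([round(f, 1) for f in conform_to_key([281.0])])  # [293.7]
```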

06-03-2014 publication date

Performance information processing apparatus, performance information processing method, and program recording medium for determining tempo and meter based on performance given by performer

Number: US20140060287A1
Author: Hiroko Okuda
Assignee: Casio Computer Co Ltd

A performance-information processing apparatus processes performance information entered thereto. When the performance information is entered in a time interval between (i) a starting time point, at which performance information of one note starts entering and (ii) a first timing, that is a certain time after another performance information of a note has been entered after the performance information of said one note was entered, a tempo determining unit determines a tempo of the performance information, based on the performance information entered in the time interval, and a meter determining unit which determines a meter of the performance information based on the tempo determined by the tempo determining unit.

27-03-2014 publication date

Gaming system and method configured to provide a musical game associated with unlockable musical instruments

Number: US20140087878A1
Assignee: INTERNATIONAL GAME TECHNOLOGY

Various embodiments of the present disclosure are directed to gaming systems and methods configured to provide a musical game associated with unlockable musical instruments. In one embodiment, the musical game is associated with a plurality of different musical instruments that are initially unlocked or locked and a plurality of different instrument playing events, each of which is associated with one or more of the musical instruments. Upon an occurrence of one of the instrument playing events, the gaming system produces at least one sound and provides an award associated with each unlocked musical instrument associated with that instrument playing event. The gaming system does not produce any sounds or provide any awards associated with any locked musical instruments associated with that instrument playing event. Upon an occurrence of an instrument unlock event, if any of the musical instruments are locked, the gaming system unlocks at least one locked musical instrument.

04-01-2018 publication date

INFORMATION RECEPTION SYSTEM, RECORDING MEDIUM, AND INFORMATION INPUT METHOD

Number: US20180004354A1
Assignee:

An information reception device is provided with: an operation surface which is adjusted to produce a characteristic index vibration by an object contact; a storage device which stores candidate information (the candidate information is related with the index vibration) serving as a candidate of input information; a microphone which acquires observation information according to observation of the actual vibration arising in the surrounding environment; and a CPU. The CPU (selecting part) judges whether or not the index vibration exists in the acquired observation information. When the CPU judges that the index vibration exists, the CPU selects the candidate information which is related with the index vibration as the input information.

1. An information reception system receiving input information according to a user's operation, comprising: an operation surface adjusted to produce a characteristic index vibration by an object contact; a storage configured to store candidate information which serves as a candidate of the input information, the candidate information being related with the index vibration; an observation sensor configured to acquire observation information according to observation of actual vibration arising in the surrounding environment; and circuitry configured to judge whether or not the index vibration exists in the acquired observation information, and select, when the circuitry judges that the index vibration exists, the candidate information related with the index vibration as the input information.
2. The information reception system according to claim 1, wherein the index vibration is a vibration arising when the object is contacting and moving on the operation surface.
3. The information reception system according to claim 1, wherein the candidate information including a plurality of the candidate information related with each other different supplementary information, acquire, when the circuitry judges that the index vibration exists, the supplementary ...

07-01-2016 publication date

AUDIO SIGNAL ANALYSIS

Number: US20160005387A1
Author: Eronen Antti Johannes
Assignee: NOKIA TECHNOLOGIES OY

A server system is provided for receiving video clips having an associated audio/musical track for processing at the server system. The system comprises a first beat tracking module for generating a first beat time sequence from the audio signal using an estimation of the signal's tempo and chroma accent information. A ceiling and floor function is applied to the tempo estimation to provide integer versions which are subsequently applied separately to a further accent signal derived from a lower-frequency sub-band of the audio signal to generate second and third beat time sequences. A selection module then compares each of the beat time sequences with the further accent signal to identify a best match.

1-65. (canceled)
66. Apparatus, the apparatus having at least one processor and at least one memory having computer-readable code stored thereon which when executed controls the at least one processor: to generate a first accent signal (a1) representing musical accents in an audio signal; to generate a second, different, accent signal (a2) representing musical accents in the audio signal; to estimate a first beat time sequence (b1) from the first accent signal; to estimate a second beat time sequence (b2) from the second accent signal; and to identify which one of the first and second beat time sequences (b1), (b2) corresponds most closely with peaks in one or both of the accent signal(s).
67. Apparatus according to claim 66, wherein the computer-readable code when executed controls the at least one processor to generate the first accent signal (a1) by extracting chroma accent features based on fundamental frequency (f0) salience analysis.
68. Apparatus according to claim 66, wherein the computer-readable code when executed controls the at least one processor to generate using the first accent signal (a1) the estimated tempo (BPM) of the audio signal.
69. Apparatus according to claim 68, wherein the ...

07-01-2021 publication date

MUSICAL PERFORMANCE ANALYSIS METHOD AND MUSICAL PERFORMANCE ANALYSIS APPARATUS

Number: US20210005173A1
Author: Li Bochen, MAEZAWA Akira
Assignee:

An apparatus is provided that accurately estimates a point at which a musical performance is started by a player. The apparatus includes the musical performance analysis unit , and the musical performance analysis unit obtains action data that includes a time series of feature data representing actions made by a player during a musical performance for a reference period and estimating a sound-production point based on the action data at an estimated point using a learned model L. 1. A musical performance analysis method realized by a computer , the method comprising:obtaining action data that includes a time series of feature data representing actions made by a player during a musical performance for a reference period; andestimating a sound-production point based on the action data at an estimated point using a learned model.2. The musical performance analysis method according to claim 1 , wherein the estimating of the sound-production point includes:calculating a probability that the estimated point, which follows the reference period, is the sound-production point, using the learned model; andestimating the sound-production point based on the probability using the learned model.3. The musical performance analysis method according to claim 2 , wherein:the reference period include a plurality of analysis points, andthe calculating of the probability sequentially calculates, for the plurality of analysis points on a time axis, the probability that the estimated point, which follows the plurality of analysis points, is the sound-production point.4. The musical performance analysis method according to claim 2 , wherein the estimating of the sound-production point estimates the sound-production point based on a distribution of a plurality of the probabilities corresponding to the plurality of respective analysis points.5. The musical performance analysis method according to claim 1 , further comprising:generating the time series of feature data based on image data ...

04-01-2018 publication date

Intelligent Crossfade With Separated Instrument Tracks

Number: US20180005614A1
Assignee:

A method is provided including separating a first file into a first plurality of instrument tracks and a second file into a second plurality of instrument tracks, wherein each instrument track of each of the first plurality and second plurality corresponds to a type of instrument; selecting a first instrument track from the first plurality of instrument tracks and a second instrument track from the second plurality of instrument tracks based at least on the type of instrument corresponding to the first instrument track and the second instrument track; fading out other instrument tracks from the first plurality of instrument tracks; performing a crossfade between the first instrument track and the second instrument track; and fading in other instrument tracks from the second plurality of instrument tracks. 1. A method comprising:separating a first file into a first plurality of instrument tracks and a second file into a second plurality of instrument tracks, wherein each instrument track of each of the first plurality and second plurality corresponds to a type of instrument;selecting a first instrument track from the first plurality of instrument tracks and a second instrument track from the second plurality of instrument tracks based at least on a similarity of the first instrument track and the second instrument track;fading out other instrument tracks from the first plurality of instrument tracks;performing a crossfade between the first instrument track and the second instrument track; andfading in other instrument tracks from the second plurality of instrument tracks.2. The method of claim 1 , further comprising:determining a dominant instrument in the first plurality of instrument tracks and a corresponding instrument track in the second plurality of instrument tracks comprising the dominant instrument, wherein the selecting comprises: selecting the dominant instrument track as the first instrument track and the corresponding instrument track as the second ...
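
The fade-out/crossfade/fade-in arithmetic on already-separated stems can be sketched as follows. Equal-power fade curves and the stem names are assumptions for illustration, and the source-separation step itself is out of scope.

```python
import numpy as np

def crossfade(track_out, track_in, fade_len):
    """Equal-power crossfade over the last/first fade_len samples of two tracks."""
    t = np.linspace(0.0, 1.0, fade_len)
    fade_out, fade_in = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)
    overlap = track_out[-fade_len:] * fade_out + track_in[:fade_len] * fade_in
    return np.concatenate([track_out[:-fade_len], overlap, track_in[fade_len:]])

def mix_transition(first_tracks, second_tracks, lead_instrument, fade_len):
    """Crossfade the matched lead-instrument stems; fade the remaining stems out of
    the first mix and in from the second, then sum everything back to one signal."""
    lead = crossfade(first_tracks[lead_instrument], second_tracks[lead_instrument], fade_len)
    others = np.zeros_like(lead)
    for name, stem in first_tracks.items():
        if name != lead_instrument:
            faded = stem.copy()
            faded[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)   # fade out
            others[:len(stem)] += faded
    for name, stem in second_tracks.items():
        if name != lead_instrument:
            faded = stem.copy()
            faded[:fade_len] *= np.linspace(0.0, 1.0, fade_len)    # fade in
            others[-len(stem):] += faded
    return lead + others

sr = 8000
a = {"guitar": np.random.randn(sr), "drums": np.random.randn(sr)}
b = {"guitar": np.random.randn(sr), "drums": np.random.randn(sr)}
print(mix_transition(a, b, "guitar", fade_len=sr // 4).shape)   # (14000,)
```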

04-01-2018 publication date

BI-DIRECTIONAL MUSIC SYNCHRONIZATION USING HAPTIC DEVICES

Number: US20180005616A1
Assignee: Intel Corporation

Systems and methods may provide for capturing one or more inbound wireless transmissions and identifying a remote user movement based on at least one of the one or more inbound wireless transmissions. Additionally, a local haptic output may be generated, by an actuator, based on the remote user movement. In one example, the actuator is a piezoelectric actuator. 1. A mobile device comprising:a receiver to capture one or more inbound wireless transmissions;a haptic controller communicatively coupled to the receiver, the haptic controller to identify a remote user movement based on at least one of the one or more inbound wireless transmissions;a piezoelectric actuator communicatively coupled to the haptic controller, the piezoelectric actuator to generate a local haptic output based on the remote user movement, wherein a pulse timing of the local haptic output is to indicate a tempo of the remote user movement, an intensity of the local haptic output is to indicate an intensity of the remote user movement, a waveform shape associated with the local haptic output is to indicate a tone of the remote user movement, and a frequency modulation associated with the local haptic output is to indicate a pitch associated with the remote user movement;a motion sensor to detect local user movement; anda transmitter communicatively coupled to the motion sensor, the transmitter to generate one or more outbound wireless transmissions based on the local user movement.2. The mobile device of claim 1 , further comprising a housing that includes a handheld form factor.3. The mobile device of claim 2 , wherein the handheld form factor is selected from a group consisting of a baton form factor claim 2 , a microphone form factor and a microphone stand form factor.4. The mobile device of claim 1 , further comprising a housing that includes a wearable form factor claim 1 , wherein the local haptic output is to be generated via the housing.5. A mobile device comprising:a receiver to capture ...

04-01-2018 publication date

Information processing method, terminal device and computer storage medium

Number: US20180005618A1
Author: LIU Pei
Assignee:

A method for processing information, terminal device and a computer storage medium are disclosed. The method for processing information includes that: a first control instruction is acquired, and a first application is switched to a preset mode according to the first control instruction; a first triggering operation is acquired based on the preset mode, at least two pieces of multimedia data are selected based on the first triggering operation, and a first playing interface is generated; when a second control instruction is acquired, the at least two pieces of multimedia data in the first playing interface are sequentially played; in a process of playing first multimedia data in the at least two pieces of multimedia data, first audio data is acquired; and the first multimedia data and the first audio data are synthesized as second multimedia data. 1. A method for processing information , comprising:acquiring a first control instruction, and switching a first application to a preset mode according to the first control instruction;acquiring a first triggering operation based on the preset mode, selecting at least two pieces of multimedia data based on the first triggering operation, and generating a first playing interface;when a second control instruction is acquired, sequentially playing the at least two pieces of multimedia data in the first playing interface;in a process of playing first multimedia data in the at least two pieces of multimedia data, acquiring first audio data, the first multimedia data being any multimedia data in the at least two pieces of multimedia data; andsynthesizing the first multimedia data and the first audio data as second multimedia data.2. The method according to claim 1 , wherein acquiring the first control instruction comprises: acquiring a second triggering operation claim 1 , and generating the first control instruction based on the second triggering operation claim 1 , wherein the second triggering operation is for a preset region ...

02-01-2020 publication date

MUSIC PRACTICE FEEDBACK SYSTEM, METHOD, AND RECORDING MEDIUM

Number: US20200005664A1
Assignee:

A music practice feedback system, includes a processor; and a memory storing instructions that cause the processor to perform, forecasting an ability to retain playing skills of a sheet music by a user via machine learning and changing a display of the sheet music based on the forecasted ability. 1. A music practice feedback system , comprising:a processor; and forecasting an ability to retain playing skills of a sheet music by a user via machine learning; and', 'changing a display of the sheet music based on the forecasted ability., 'a memory storing instructions that cause the processor to perform2. A non-transitory computer-readable recording medium recording a music practice feedback program , the program causing a computer to perform:forecasting an ability to retain playing skills of a sheet music by a user via machine learning; andchanging a display of the sheet music based on the forecasted ability3. A music practice feedback method , comprising:forecasting an ability to retain playing skills of a sheet music by a user via machine learning; andchanging a display of the sheet music based on the forecasted ability. The present application is a Continuation Application of U.S. patent application Ser. No. 15/791,628, filed on Oct. 24, 2017 which is a Continuation Application of U.S. patent application Ser. No. 15/441,916, now U.S. Pat. No. 9,842,510, issued on Dec. 12, 2017, which is a Continuation Application of U.S. patent application Ser. No. 14/985,160, now U.S. Pat. No. 9,672,799, issued on Jun. 6, 2017, the entire contents of which are hereby incorporated by reference.The present invention relates generally to a music practice feedback system, and more particularly, but not by way of limitation, to a music practice feedback system for changing a visualization of sheet music based on collecting of information related to the playing of regions of the sheet music.Conventional techniques for visualization of music and other sounds use note extraction. The ...

03-01-2019 publication date

Musical Score Generator

Number: US20190005929A1
Assignee: KYOCERA Document Solutions Inc.

A method of generating a musical score file for one or more target musical instruments with a score generation component based on input audio data. The score generation component finds candidate musical notes within the input audio data using a frequency analysis to identify segments that share substantially the same audio frequency, and finds a best match for those candidate musical notes in audio data associated with target musical instruments in a sound database. A generated musical score file can be printed as sheet music or audibly played back over speakers. 1. A musical score generator device , comprising:a target instrument parameter for a target musical instrument based on a set of instructions to:receive an input audio data and a selection of one or more target musical instruments at a score generator component;identify candidate musical notes within the input audio data by performing a frequency analysis on the input audio data in an input audio interpretation parameter and identify segments of the input audio data that share substantially the same audio frequency within the input audio interpretation parameter;generate a musical score file with a page description header that identifies print settings and a musical instrument information section that identifies one or more target musical instruments; anda display component that displays the generated musical score file for the target musical instrument on a display screen.2. The musical score generator device of claim 1 , further comprising instructions to perform frequency analysis on the input audio data and identify notes within frequencies present in the input audio data.3. The musical score generator device of claim 1 , further comprising instructions to identify candidate musical note information for the one or more target musical instruments from a sound database by performing frequency analysis on the input audio data and identify musical notes in the target musical instrument that share ...

03-01-2019 publication date

Electronic wind instrument, method of controlling the electronic wind instrument, and computer readable recording medium with a program for controlling the electronic wind instrument

Number: US20190005932A1
Author: Yuji Tabata
Assignee: Casio Computer Co Ltd

An electronic wind instrument is provided, which is capable of representing a wide range of performances using a tonguing operation. The electronic wind instrument has at least one sensor, a sound source for generating a tone, and a controller. The controller controls a tonguing performance detecting process for detecting a tonguing performance played by the player based on the output value from the one sensor, and a tone muting process for muting the tone output from the speaker in accordance with the lip position of the player determined in the lip position determining process, while the tonguing performance is being detected in the tonguing performance detecting process.

03-01-2019 publication date

System and Method for improving singing voice separation from monaural music recordings

Number: US20190005934A1
Author: Deif Hatem Mohamed
Assignee:

There is provided a post processing technique or method for separation algorithms to separate vocals from monaural music recordings. The method comprises detecting traces of pitched instruments in a magnitude spectrum of a separated voice using Hough transform and removing the detected traces of pitched instruments using median filtering to improve the quality of the separated voice and to form a new separated music signal. The method further comprises applying adaptive median filtering techniques to remove the identified Hough regions from the vocal spectrogram producing separated pitched instruments harmonics and new vocals while adding the separated pitched instruments harmonics to a music signal separated using any separation algorithm to form the new separated music signal. 1. A method for improving singing voice separation from monaural music recordings , the method comprising:detecting traces of pitched instruments in a magnitude spectrum of a separated voice using Hough transform; andremoving the detected traces of pitched instruments using adaptive median filtering to improve a quality of the separated voice and to form a new separated music signal.2. The method of improving singing voice separation of claim 1 , wherein the method further comprises:generating the magnitude spectrogram of a mixture signal;converting the magnitude spectrogram to a grey scale image;applying a plurality of binarization steps to the grey scale image to generate a final binary image;applying Hough transform to the final binary image;identifying horizontal ridges represented by Hough lines and calculating variable frequency bands of the identified horizontal ridges;calculating rectangular regions denoted here as Hough regions;generating a vocal spectrogram from vocal signals separated using any reference separation algorithm;applying adaptive median filtering techniques to remove the identified Hough regions from the vocal spectrogram producing separated pitched instruments ...

03-01-2019 publication date

SOUND SIGNAL PROCESSING METHOD AND SOUND SIGNAL PROCESSING APPARATUS

Number: US20190005935A1
Author: SASAI DAN
Assignee:

A sound signal processing method according to an embodiment includes a step of acquiring an input sound signal, a step of acquiring a beat number per unit time period from the input sound signal, a step of normalizing the input sound signal with the beat number per unit time period, a step of calculating a beat spectrum of the normalized input sound signal, and a step of calculating a rhythm similarity between the beat spectrum of the normalized input sound signal and a normalized beat spectrum calculated from a reference sound signal. 1. A sound signal processing method comprising:acquiring a beat number per unit time period from an input sound signal;executing a normalization process for normalizing the input sound signal with the beat number per unit time period;calculating a rhythm similarity between a beat spectrum of the normalized input sound signal and a normalized beat spectrum calculated from a reference sound signal.2. The sound signal processing method according to claim 1 , further comprising:calculating a second similarity between the input sound signal and the reference sound signal using nonnegative matrix factorization; andintegrating the rhythm similarity and the second similarity.3. The sound signal processing method according to claim 1 , further comprising:calculating an amplitude spectrogram of the input sound signal; andcalculating a spectral difference that is a difference in amplitude between adjacent frames on a time axis from the amplitude spectrogram, whereinin the normalization process, the time axis of the spectral difference is normalized with a beat number per unit time period.4. The sound signal processing method according to claim 3 , whereinin the normalization process, the time axis of the spectral difference is divided by n times the beat number per unit time period to normalize the time axis into 1/n beat units.5. The sound signal processing method according to claim 3 , whereinat the calculating a beat spectrum of the ...
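
A minimal sketch of the normalization idea: resample an onset-strength curve so that one beat spans a fixed number of samples, take an autocorrelation as the beat spectrum, and compare two clips with a cosine similarity. The onset representation, grid resolution, and similarity measure are illustrative assumptions, not the embodiment's exact computations.

```python
import numpy as np

def normalize_to_beats(onset_curve, frame_rate, bpm, samples_per_beat=16):
    """Resample a per-frame onset-strength curve so one beat = samples_per_beat points."""
    beat_len = frame_rate * 60.0 / bpm                 # frames per beat
    n_beats = len(onset_curve) / beat_len
    grid = np.arange(0, n_beats, 1.0 / samples_per_beat) * beat_len
    return np.interp(grid, np.arange(len(onset_curve)), onset_curve)

def beat_spectrum(normalized, max_lag=64):
    """Autocorrelation of the tempo-normalized curve = a simple beat spectrum."""
    x = normalized - normalized.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + max_lag]
    return ac / (ac[0] + 1e-12)

def rhythm_similarity(spec_a, spec_b):
    """Cosine similarity between two beat spectra."""
    return float(np.dot(spec_a, spec_b) /
                 (np.linalg.norm(spec_a) * np.linalg.norm(spec_b) + 1e-12))

# Two onset curves with the same rhythm at different tempi compare as similar.
def click_track(bpm, frame_rate=100, seconds=20):
    onsets = np.zeros(frame_rate * seconds)
    onsets[::int(round(frame_rate * 60.0 / bpm))] = 1.0
    return onsets

a = beat_spectrum(normalize_to_beats(click_track(90), 100, 90))
b = beat_spectrum(normalize_to_beats(click_track(140), 100, 140))
print(round(rhythm_similarity(a, b), 2))   # high (the two clips share the same rhythm)
```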

08-01-2015 publication date

DETECTING BEAT INFORMATION USING A DIVERSE SET OF CORRELATIONS

Number: US20150007708A1
Assignee: MICROSOFT CORPORATION

A beat analysis module is described for determining beat information associated with an audio item. The beat analysis module uses an Expectation-Maximization (EM) approach to determine an average beat period, where correlation is performed over diverse representations of the audio item. The beat analysis module can determine the beat information in a relative short period of time. As such, the beat analysis module can perform its analysis together with another application task (such as a game application task) without disrupting the real time performance of that application task. In one application, a user may select his or her own audio items to be used in conjunction with the application task. 1. A beat analysis module for analyzing an audio item , comprising:an average beat period determination module for determining an average beat period of an audio item, the average beat period determination module being configured to use an Expectation-Maximization (EM) approach to determine the average beat period, wherein correlation is performed over plural representations of the audio item; anda beat onset determination module for determining, on a basis of the average beat period, onset information associated within the commencement of beats within the audio item.2. The beat analysis module of claim 1 , wherein the beat analysis module is used within an application module claim 1 , wherein the beat analysis module is configured to determine the average beat period and the onset information substantially concurrently with respect to another task performed by the application module.3. The beat analysis module of claim 2 , wherein the application module is a game module claim 2 , wherein the beat analysis module is configured to determine the average beat period and the onset information substantially concurrently with respect to a game playing operation performed by the game module.420-. (canceled)21. The beat analysis module of claim 1 , wherein the correlation is ...

20-01-2022 publication date

Method and device for audio generation

Number: US20220016527A1
Author: Jian Shi, Jiayun Li, Xu DAI

The present disclosure relates to a method and device for audio generation. The method includes: obtaining a target rhythm, a target verse melody and a target chorus melody; configuring the target rhythm as a first audio track, the target verse melody as a second audio track, and the target chorus melody as a third audio track; generating a target audio by aligning start playing time of the first audio track, the second audio track and the third audio track to beat occurrence time of a first beat, a second beat and a third beat in a first metronome data respectively.

20-01-2022 publication date

VIDEO CONTROL DEVICE AND VIDEO CONTROL METHOD

Number: US20220020348A1
Assignee: ROLAND CORPORATION

This video control device includes: a detection unit that detects a beat timing of audio; and a control unit that updates a display mode of a video on the basis of the beat timing and change information indicating a change in a display mode of a video displayed on a display device. 1. A video control device comprising:a detection unit configured to detect a beat timing of an audio; anda control unit configured to change a display mode of a video displayed on a display device based on the beat timing and change information indicating change content of the display mode.2. (canceled)3. The video control device according to claim 1 , wherein claim 1 , the switching from the video to another video is performed as the change in the display mode of the video claim 1 ,the video includes a first video to which the audio is added and a second video different from the first video, andthe control unit performs a parallel playback process of the first and second videos and performs repeated playback of one of the first and second videos which ends during playback of the other one of the first and second videos.4. The video control device according to claim 1 , wherein claim 1 , the switching from the video to another video is performed as the change in the display mode of the video claim 1 ,the video includes a first video and a second video different from the first video, andthe control unit performs a parallel playback process of the audio, the first video, and the second video and performs repeated playback of the first video or the second video ended during playback of the audio.5. The video control device according to claim 1 , wherein the control unit changes a parameter of the video related to the effect and performs the addition of the effect to the video as the change in the display mode.6. The video control device according to claim 5 , wherein the control unit changes intensity of the parameter in accordance with a waveform of a temporally changing predetermined ...

12-01-2017 publication date

Interactive Performance Direction for a Simultaneous Multi-Tone Instrument

Number: US20170011722A1
Assignee:

A musical instrument performance solution is described. Labels with visual indicators provide a reference to performers such that a proper combination of instrument inputs may be selected at the appropriate time. The visual indicators include colors and/or shapes. The visual indicators may be presented using differently-colored lyrical text, where each color corresponds to a set of notes. Each set of notes may for a chordal group such as a triad. The visual indicators may be associated with labels that are able to be adhered to various instrument inputs such as keys of a keyboard or piano. 1. A method that provides interactive music playback media , the method comprising:establishing a base key;receiving a selection of a media item;extracting musical parameters associated with the media item, the musical parameters including at least a target key;transposing the media item from the target key to the base key; andlinking the transposed media item to at least one musical parameter associated with the media item.2. The method of claim 1 , wherein the at least one musical parameter includes lyrics.3. The method of claim 2 , wherein:extracting musical parameters comprises identifying a chord progression associated with the media item, and 'generating a text-based lyrical representation where each text element is represented in a color associated with at least one musical note.', 'linking the transposed media item comprises4. The method of claim 3 , wherein each color is associated with a bass key label and a plurality of harmony key labels.5. The method of claim 4 , wherein each key label is associated with a key of at least one of a piano and keyboard.6. The method of claim 1 , wherein the base key is associated with a particular target instrument.7. The method of claim 1 , wherein the musical parameters further include a chord progression and a set of lyrics. This application is a divisional of U.S. patent Application Ser. No. 14/798,317, filed on Jul. 13, 2015. U.S. ...

12-01-2017 publication date

Method for Distinguishing Components of an Acoustic Signal

Number: US20170011741A1

A method distinguishes components of a signal by processing the signal to estimate a set of analysis features, wherein each analysis feature defines an element of the signal and has feature values that represent parts of the signal, processing the signal to estimate input features of the signal, and processing the input features using a deep neural network to assign an associative descriptor to each element of the signal, wherein a degree of similarity between the associative descriptors of different elements is related to a degree to which the parts of the signal represented by the elements belong to a single component of the signal. The similarities between associative descriptors are processed to estimate correspondences between the elements of the signal and the components in the signal. Then, the signal is processed using the correspondences to distinguish component parts of the signal.

14-01-2016 publication date

AUDIO MATCHING WITH SUPPLEMENTAL SEMANTIC AUDIO RECOGNITION AND REPORT GENERATION

Number: US20160012807A1
Assignee:

System, apparatus and method for determining semantic information from audio, where incoming audio is sampled and processed to extract audio features, including temporal, spectral, harmonic and rhythmic features. The extracted audio features are compared to stored audio templates that include ranges and/or values for certain features and are tagged for specific ranges and/or values. The semantic information may be associated with audio codes to determine changing characteristics of identified media during a time period. 120.-. (canceled)21. A processor-based method for producing supplemental information for media containing an embedded audio code , the method comprising:obtaining an audio code from a device during a first time period, the audio code representing a first characteristic of the media, wherein the audio code is read from an audio portion of the media;obtaining first semantic audio signature data from the device during the first time period, the first semantic audio signature data being a measure of at least one of a temporal feature, a spectral feature, a harmonic feature, or a rhythmic feature relating to a second characteristic of the media; andassociating the audio code of the first time period to a second time period when the processor determines that a second semantic audio signature data of the second time period substantially matches the first semantic audio signature data for the first time period.22. The method of claim 21 , wherein:the temporal feature includes at least one of amplitude, power, or zero crossing of at least some of the audio of the media;the spectral feature includes at least one of a spectral centroid, a spectral rolloff, a spectral flux, a spectral flatness measure, a spectral crest factor, Mel-frequency cepstral coefficients, Daubechies wavelet coefficients, a spectral dissonance, a spectral irregularity or a spectral inharmonicity of at least some of the audio of the media;the harmonic feature includes at least one of a ...
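
A small sketch of computing a few of the named feature families (temporal and spectral) and matching them against tagged templates with value ranges. The feature set, range values, and tags are invented for illustration and are not taken from any real system.

```python
import numpy as np

def semantic_features(x, sr):
    """A few of the feature families named above: temporal and spectral descriptors."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    power = spec ** 2
    centroid = float(np.sum(freqs * power) / (np.sum(power) + 1e-12))
    cumulative = np.cumsum(power)
    rolloff = float(freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])])
    zcr = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2.0)
    return {"centroid_hz": centroid, "rolloff_hz": rolloff, "zero_crossing_rate": zcr}

def match_templates(features, templates):
    """Return the tags of every template whose feature ranges contain the measurement."""
    hits = []
    for tag, ranges in templates.items():
        if all(lo <= features[name] <= hi for name, (lo, hi) in ranges.items()):
            hits.append(tag)
    return hits

# Illustrative templates: the ranges are made up for the example.
templates = {
    "bright/percussive": {"centroid_hz": (2000, 8000), "zero_crossing_rate": (0.1, 1.0)},
    "dark/tonal":        {"centroid_hz": (0, 2000),    "zero_crossing_rate": (0.0, 0.1)},
}
sr = 16000
t = np.arange(4096) / sr
print(match_templates(semantic_features(np.sin(2 * np.pi * 220 * t), sr), templates))
# ['dark/tonal']
```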

10-01-2019 publication date

DEVICE CONFIGURATIONS AND METHODS FOR GENERATING DRUM PATTERNS

Number: US20190012995A1

The present disclosure relates to methods and devices for generating drum patterns. In one embodiment a method includes receiving a user generated input including a plurality of events during a time interval, and detecting the events. The method also includes analyzing the events to define a rhythmic pattern based on number of events detected, placement of each event in the time interval, and duration of the time interval. Each of the plurality of events may be classified into at least one type of drum pattern element and a drum pattern may be generated based on the rhythmic pattern to include a drum element for each event of the rhythmic pattern. In certain embodiments, pitch or tone of the events may be determined to classify events as components of a drum pattern. Processes and devices allow for professional sounding drum patterns to be output based on received input.

1. A method for generating a drum pattern, the method comprising: receiving a user generated input including a plurality of events during a time interval; detecting the plurality of events in the user generated input; analyzing the plurality of events to define a rhythmic pattern based on number of events detected, placement of each event in the time interval, and duration of the time interval, wherein analyzing includes classifying each of the plurality of events into at least one type of drum pattern element; and generating a drum pattern based on the rhythmic pattern, wherein the drum pattern includes a drum element for each event of the rhythmic pattern.
2. The method of claim 1, wherein the user generated input is an audio signal received from at least one of a musical instrument and a microphone, the audio signal indicating a desired groove for the drum pattern.
3. The method of claim 1, wherein the user generated input is a percussive beat tapped as input to a device, the percussive beat indicating a desired groove for the drum pattern.
4. The method of claim 1, ...
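
The quantize-and-classify step can be illustrated as below, mapping tap times within the interval onto a 16-step grid and classifying each event by a rough pitch rule. The grid size, pitch threshold, and notation are assumptions, not the disclosure's actual parameters.

```python
def drum_pattern_from_taps(tap_times, tap_pitches, interval_s, steps=16):
    """Quantize tapped events in a time interval to a 16-step grid and classify each
    event as a kick or snare by its (rough) pitch. The pitch rule is a placeholder."""
    grid = ["--"] * steps
    step_len = interval_s / steps
    for t, pitch in zip(tap_times, tap_pitches):
        step = min(int(round(t / step_len)), steps - 1)   # nearest grid slot
        grid[step] = "K " if pitch < 150 else "S "        # low taps -> kick, high -> snare
    return "".join(grid)

# Eight taps over a 2-second interval: a basic alternating kick/snare beat.
times   = [0.00, 0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75]
pitches = [80,   300,  80,   300,  80,   300,  80,   300]   # alternating low/high
print(drum_pattern_from_taps(times, pitches, interval_s=2.0))
# K --S --K --S --K --S --K --S --
```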

10-01-2019 publication date

TECHNIQUES FOR DYNAMIC MUSIC PERFORMANCE AND RELATED SYSTEMS AND METHODS

Number: US20190012997A1
Author: KATZ Shelley
Assignee: Symphonova, Ltd.

According to some aspects, an apparatus is provided for controlling the production of music, the apparatus comprising at least one processor, and at least one processor-readable storage medium comprising processor-executable instructions that, when executed, cause the at least one processor to receive data indicative of acceleration of a user device, detect that the acceleration of the user device has exceeded a predetermined threshold based at least in part on the received data, determine that no beat point has been triggered by the apparatus for at least a first period of time, and trigger a beat point in response to said detecting that the acceleration of the user device has exceeded the predetermined threshold and said determining that no beat point has been triggered for at least the first period of time.

1. An apparatus for controlling the production of music, the apparatus comprising: at least one processor; and at least one processor-readable storage medium comprising processor-executable instructions that, when executed, cause the at least one processor to: receive data indicative of acceleration of a user device; detect whether the acceleration of the user device has exceeded a predetermined threshold based at least in part on the received data; determine whether a beat point has been triggered by the apparatus within a prior period of time; and trigger a beat point when the acceleration of the user device is detected to have exceeded the predetermined threshold and when no beat point is determined to have been triggered during the prior period of time.

2. The apparatus of claim 1, wherein the processor-executable instructions, when executed by the at least one processor, further cause the at least one processor to generate acoustic data according to a digital musical score in response to the beat point trigger.

3. The apparatus of claim 2, wherein a tempo of the acoustic data generated according to the digital musical score is ...
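
The trigger condition in claim 1 is essentially a thresholded acceleration with a refractory window. A small sketch of that logic follows, with placeholder threshold and window values that are not taken from the disclosure.

```python
# Sketch of the trigger logic claimed above: fire a beat point when the
# measured acceleration magnitude exceeds a threshold AND no beat point has
# been triggered within a prior "refractory" window. Values are placeholders.
class BeatTrigger:
    def __init__(self, threshold: float = 12.0, refractory_s: float = 0.25):
        self.threshold = threshold          # m/s^2, hypothetical
        self.refractory_s = refractory_s    # minimum spacing between beat points
        self._last_beat_time = None

    def on_accel_sample(self, t: float, accel_magnitude: float) -> bool:
        """Return True if a beat point is triggered at time t (seconds)."""
        if accel_magnitude <= self.threshold:
            return False
        if self._last_beat_time is not None and (t - self._last_beat_time) < self.refractory_s:
            return False                     # a beat was already triggered recently
        self._last_beat_time = t
        return True

trigger = BeatTrigger()
samples = [(0.00, 3.0), (0.10, 15.0), (0.18, 14.0), (0.60, 16.0)]
print([t for t, a in samples if trigger.on_accel_sample(t, a)])  # -> [0.1, 0.6]
```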

10-01-2019 publication date

ELECTROPHONIC CHORDOPHONE SYSTEM, APPARATUS AND METHOD

Number: US20190012998A1

Provided is an electrophonic chordophone system () comprising sensor () configured to be operatively responsive to respective strings of a guitar (). The system () also includes non-transitory processor-readable storage means () which contains first and second user-configurable tonal formats () and (). Also included is processor (), arranged in signal communication with the sensor () and storage means (), and adapted to associate a melody group of notes producible by the strings with the first tonal format () in a one-to-one mapping or direct correlation, and to associate a control group of notes producible by the strings with the second tonal format () in a one-to-many mapping or indirect correlation. Also included is a synthesiser (), arranged in signal communication with the processor (), and configured to produce both the first and second tonal formats simultaneously in substantial real-time. The first tonal format () is actuatable via the melody group of notes and the second tonal format () is dynamically selectable via the control group of notes and also actuatable via the melody group of notes. In this manner, a melody is producible via the first tonal format () with a dynamic backing track producible via the second tonal format (), all using one guitar (). 1. An electrophonic chordophone system comprising:a sensor operatively responsive to respective strings of a guitar;non-transitory processor-readable storage means containing first and second user-configurable tonal formats; i) allow a user to associate a melody group of notes producible by the strings with the first tonal format in a one-to-one mapping, and', 'ii) allow the user to associate a control group of notes producible by the strings with the second tonal format in a one-to-many mapping, said control group forming a subset of the melody group; and, 'a processor arranged in signal communication with the sensor and storage means, said processor adaptable toa synthesiser arranged in signal ...

10-01-2019 publication date

APPARATUS AND METHOD FOR HARMONIC-PERCUSSIVE-RESIDUAL SOUND SEPARATION USING A STRUCTURE TENSOR ON SPECTROGRAMS

Number: US20190012999A1

An apparatus for analysing a magnitude spectrogram of an audio signal is provided. The apparatus includes a frequency change determiner being configured to determine a change of a frequency for each time-frequency bin of a plurality of time-frequency bins of the magnitude spectrogram of the audio signal depending on the magnitude spectrogram of the audio signal. Moreover, the apparatus includes a classifier being configured to assign each time-frequency bin of the plurality of time-frequency bins to a signal component group of two or more signal component groups depending on the change of the frequency determined for the time-frequency bin. 1. An apparatus for analysing a magnitude spectrogram of an audio signal , comprising:a frequency change determiner being configured to determine a change of a frequency for each time-frequency bin of a plurality of time-frequency bins of the magnitude spectrogram of the audio signal depending on the magnitude spectrogram of the audio signal, anda classifier being configured to assign each time-frequency bin of the plurality of time-frequency bins to a signal component group of two or more signal component groups depending on the change of the frequency determined for said time-frequency bin.2. The apparatus according to claim 1 ,wherein the frequency change determiner is configured to determine the change of the frequency for each time-frequency bin of the plurality of time-frequency bins depending on an angle for said time-frequency bin, wherein the angle for said time-frequency bin depends on the magnitude spectrogram of the audio signal.3. The apparatus according to claim 2 ,wherein the frequency change determiner is configured to determine the change of the frequency for each time-frequency bin of the plurality of time-frequency bins further depending on a sampling frequency of the audio signal, and depending on a length of an analysis window and depending on a hop size of the analysis window.5. The apparatus according to ...
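
The decisive step for the separation is the per-bin classification. The sketch below illustrates only that step, taking an already-computed frequency-change estimate per time-frequency bin as input (the patent derives it from a structure tensor, which is not reproduced here) and using placeholder thresholds.

```python
# Simplified sketch of the classification step only: given a per-bin estimate
# of local frequency change R[t][f] (Hz per frame, however it was computed),
# assign each time-frequency bin to "harmonic", "percussive" or "residual".
# Thresholds are illustrative placeholders, not values from the disclosure.
from typing import List

def classify_bin(delta_f: float,
                 harmonic_max: float = 50.0,
                 percussive_min: float = 1000.0) -> str:
    # Nearly constant frequency over time -> sustained harmonic partial;
    # extremely fast change -> broadband percussive transient; else residual.
    if abs(delta_f) <= harmonic_max:
        return "harmonic"
    if abs(delta_f) >= percussive_min:
        return "percussive"
    return "residual"

def classify_bins(freq_change: List[List[float]]) -> List[List[str]]:
    """Return one signal-component label per time-frequency bin."""
    return [[classify_bin(v) for v in row] for row in freq_change]

# Toy 2x3 grid of frequency-change estimates (Hz/frame).
R = [[10.0, 400.0, 2500.0],
     [-20.0, 5000.0, 120.0]]
print(classify_bins(R))
# -> [['harmonic', 'residual', 'percussive'], ['harmonic', 'percussive', 'residual']]
```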

14-01-2021 publication date

Electronic musical instrument, electronic musical instrument control method, and storage medium

Number: US20210012758A1
Assignee: Casio Computer Co Ltd

An electronic musical instrument includes: a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained and learned singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.

09-01-2020 publication date

BEAT DECOMPOSITION TO FACILITATE AUTOMATIC VIDEO EDITING

Number: US20200013379A1
Author: Vaucher Christophe

The disclosed technology relates to a process for detecting musical artifacts within a musical composition. The detection of musical artifacts is based on analyzing the energy and frequency of the digital signal of the musical composition. The identification of musical artifacts within a musical composition would be used in connection with audio-video editing. 1. A computer-implemented method for identifying musical artifacts , the method comprising:receiving a primary waveform representing a musical composition, wherein the musical composition comprises a plurality of musical artifacts;filtering the primary waveform to generate alternative waveforms associated with the plurality of musical artifacts; andautomatically analyzing the alternative waveforms to identify time points in the primary waveform that correspond with the plurality of musical artifacts.2. The computer-implemented method of claim 1 , wherein the filtering of the primary waveform comprises a first filtering process using two or more interlaced band pass filters that outputs two or more secondary waveforms claim 1 , and wherein the first filtering process comprises:a) calculating samples' module of the two or more secondary waveforms,b) identifying the samples' module that exceeds a first predetermined frequency range threshold, wherein each of the musical artifacts has a different frequency range threshold,c) identifying frequency ranges for each of the musical artifacts which one of the two or more secondary waveforms is featuring a most samples' module exceeding the first predetermined frequency range threshold, andd) identifying a preliminary list of the musical artifacts based on the samples' module of the two or more secondary waveforms featuring the most samples' module exceeding the first predetermined frequency range threshold for each of the musical artifacts.3. The computer-implemented method of claim 2 , wherein the filtering of the primary waveform further comprises a second filtering ...
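
To make the filter-and-threshold idea concrete, the sketch below band-limits the primary waveform to a nominal low-frequency "kick" band and reports the times at which the rectified, filtered signal exceeds a threshold. The band edges, the threshold and the use of a single band (rather than the interlaced filter bank of the claims) are simplifying assumptions.

```python
# Hedged sketch of the band-pass-and-threshold idea: filter the primary
# waveform into one band associated with an artifact class, rectify it, and
# report the sample times whose magnitude exceeds a threshold.
import numpy as np
from scipy.signal import butter, sosfilt

def detect_artifact_times(x: np.ndarray, fs: float,
                          band=(40.0, 120.0), threshold=0.3,
                          min_gap_s=0.1) -> list:
    """Return times (s) where the band-limited magnitude crosses the threshold."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    magnitude = np.abs(sosfilt(sos, x))          # the per-sample "module"
    hits, last = [], -np.inf
    for i in np.flatnonzero(magnitude > threshold):
        t = i / fs
        if t - last >= min_gap_s:                # collapse clusters of samples
            hits.append(round(t, 3))
            last = t
    return hits

# Synthetic test signal: two short 80 Hz bursts at 0.5 s and 1.5 s.
fs = 8000
t = np.arange(0, 2.0, 1 / fs)
x = np.zeros_like(t)
for onset in (0.5, 1.5):
    idx = (t >= onset) & (t < onset + 0.05)
    x[idx] = np.sin(2 * np.pi * 80 * t[idx])
print(detect_artifact_times(x, fs))  # -> two detections, near 0.5 s and 1.5 s
```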

09-01-2020 publication date

MUSICAL BEAT DETECTION SYSTEM AND METHOD FOR LIGHTING CONTROL

Number: US20200015339A1
Author: Zhang Michael Weibin
Assignee: Fourstar Group Inc.

A system for controlling a plurality of light sources particularly LED's that are provided in a number of different arrangements including a light string and controlled from a controller that includes an input microphone for detecting an audio signal, at least one pre-amplifier, a microcomputer unit receiving signals from the pre-amplifier, and a circuit for driving a plurality of LED and that enables lighting control of the plurality of LED's in accordance with the input audio signal and within a wide dynamic range.

1. A system for controlling a plurality of light sources particularly LED's that are provided in a number of different arrangements including a light string and controlled from a controller that includes an input microphone for detecting an audio signal, at least one pre-amplifier, a microcomputer unit receiving signals from the pre-amplifier, and a circuit for driving a plurality of LED and that enables lighting control of the plurality of LED's in accordance with the input audio signal and within a wide dynamic range.

2. The system of claim 1, including adjusting the threshold parameters TTH and BTH based upon tempo TP.

3. The system of claim 1, including adjusting TTH based upon audio signal magnitude M.

4. The system of claim 1, including first and second series-connected pre-amplifiers and changing pre-amplifiers as an input to the MCU, based upon the level of sensed magnitude M and/or the level of tempo T.

5. A method for controlling lighting by audio, including: receiving audio signals and amplifying said audio signals; detecting any waveform with a level greater than a predefined level threshold and calculate the lasting time (T1) of the detected waveform; comparing the lasting time T1 to a predefined time threshold (TTH), and if greater then keep the waveform as one bass waveform, or otherwise discard the waveform; counting the number of bass waveform within a pre-defined time window BTW to determine a bass beat; and using the bass ...
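
A compact sketch of the bass-beat counting in claim 5 follows, operating on an amplitude envelope: excursions above a level threshold that last at least TTH are kept as bass waveforms, and those falling inside a window BTW are counted. All numeric values are placeholders.

```python
# Sketch of claim 5's bass-beat logic under stated assumptions: scan an
# amplitude envelope, keep excursions above a level threshold whose duration
# is at least TTH, and count how many fall inside a window BTW.
from typing import List

def find_bass_waveforms(env: List[float], fs: float,
                        level_th: float, tth_s: float) -> List[float]:
    """Return start times (s) of excursions above level_th lasting >= tth_s."""
    beats, start = [], None
    for i, v in enumerate(env + [0.0]):          # sentinel closes a trailing run
        if v > level_th and start is None:
            start = i
        elif v <= level_th and start is not None:
            if (i - start) / fs >= tth_s:
                beats.append(start / fs)
            start = None
    return beats

def count_bass_beats(env: List[float], fs: float, level_th=0.6,
                     tth_s=0.02, btw_s=1.0) -> int:
    """Number of qualifying bass waveforms inside the first BTW window."""
    return sum(1 for t in find_bass_waveforms(env, fs, level_th, tth_s) if t < btw_s)

# 100 Hz envelope: one long excursion (kept) and one 1-sample blip (discarded).
envelope = [0.0] * 10 + [0.9] * 5 + [0.0] * 20 + [0.9] + [0.0] * 64
print(count_bass_beats(envelope, fs=100.0))  # -> 1
```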

21-01-2016 publication date

PLAYBACK DEVICE, PLAYBACK METHOD, AND STORAGE MEDIUM

Number: US20160019024A1
Assignee: CASIO COMPUTER CO., LTD.

A playback device includes a first buffer and a second buffer, each having storage regions, and a processing unit. The processing unit performs: a storage process that causes input audio data to be stored in the storage regions of the first buffer in order; a first playback process that causes the stored audio data to be played back in the order in which the audio data was stored; a designation process that designates, in response to a user input, at least one of the plurality of storage regions of the first buffer in which the audio data is stored; a copy process that causes the audio data stored in the designated storage region of the first buffer to be copied to the second buffer; and a second playback process that causes the audio data copied to the second buffer to be repeatedly played back.
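
The buffer handling can be pictured as two containers and four operations (store, play, designate-and-copy, loop). The sketch below is a data-structure illustration only, with an arbitrary region count and toy chunks; it is not the device's firmware.

```python
# Data-structure sketch of the two-buffer scheme described above: incoming
# audio chunks are stored in the first buffer's regions in order, a
# user-designated region is copied to the second buffer, which is then looped.
from typing import List

class TwoBufferPlayer:
    def __init__(self, num_regions: int = 8):
        self.first: List[list] = [[] for _ in range(num_regions)]
        self.second: List[list] = []
        self._write_idx = 0

    def store(self, chunk: list) -> None:                    # storage process
        self.first[self._write_idx % len(self.first)] = chunk
        self._write_idx += 1

    def play_first(self) -> list:                            # first playback process
        return [s for region in self.first for s in region]

    def designate_and_copy(self, region_index: int) -> None: # designation + copy
        self.second = list(self.first[region_index])

    def play_second_looped(self, repeats: int) -> list:      # second playback process
        return self.second * repeats

player = TwoBufferPlayer(num_regions=4)
for chunk in [[1, 2], [3, 4], [5, 6], [7, 8]]:
    player.store(chunk)
player.designate_and_copy(region_index=1)    # user designates the region holding [3, 4]
print(player.play_second_looped(repeats=3))  # -> [3, 4, 3, 4, 3, 4]
```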

21-01-2016 publication date

Machine-control of a device based on machine-detected transitions

Number: US20160019876A1
Assignee: JPMorgan Chase Bank NA

Apparatus, methods, and systems that operate to perform machine-control of a device based on machine-detected transitions are disclosed.

21-01-2016 publication date

AUDIO SIGNAL PROCESSING METHODS AND SYSTEMS

Number: US20160019878A1
Author: Brown Matthew

Described are methods and systems of identifying one or more fundamental frequency component(s) of an audio signal. The methods and systems may include any one or more of an audio event receiving step, a signal discretization step, a masking step, and/or a transcription step. 1. A method of identifying at least one fundamental frequency component of an audio signal , the method comprising:(a) filtering the audio signal to produce a plurality of sub-band time domain signals;(b) transforming a plurality of sub-band time domain signals into a plurality of sub-band frequency domain signal by mathematical operators;(c) summing together a plurality of sub-band frequency domain signals to yield a single spectrum;(d) calculating the bispectrum of a plurality of sub-band time domain signals;(e) summing together the bispectra of a plurality of sub-band time domain signals;(f) calculating the diagonal of a plurality of the summed bispectra;(g) multiplying the single spectrum and the diagonal of the summed bispectra to produce a product spectrum; and(h) identifying at least one fundamental frequency component of the audio signal from the product spectrum or information contained in the product spectrum.2. The method according to claim 1 , further comprising receiving an audio event and converting the audio event into the audio signal.3. The method according to claim 1 , wherein at least one identifiable fundamental frequency component is associated with a known audio event claim 1 , wherein identification of at least one fundamental frequency component enables identification of at least one corresponding known audio event present in the audio signal.4. The method according to claim 1 , wherein the method further comprises visually representing on a screen or other display means at least one selected from the group consisting of:the product spectrum;information contained in the product spectrum;identifiable fundamental frequency components; anda representation of identifiable ...
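
Steps (c) through (h) amount to multiplying a summed spectrum by the diagonal of a bispectrum and picking the peak. The sketch below performs a single-band version of that product on a synthetic tone, omitting the sub-band filter bank of steps (a) and (b); the windowing and peak-picking details are assumptions.

```python
# Rough single-band sketch of steps (c)-(h): take a magnitude spectrum, take
# the diagonal of the bispectrum (B(f, f) = X(f)^2 * conj(X(2f))), multiply
# the two, and read the fundamental off the peak of the product spectrum.
import numpy as np

def fundamental_from_product_spectrum(x: np.ndarray, fs: float) -> float:
    X = np.fft.rfft(x * np.hanning(len(x)))
    spectrum = np.abs(X)
    half = len(X) // 2
    bispec_diag = np.abs(X[:half] ** 2 * np.conj(X[: 2 * half : 2]))  # |X(f)^2 X*(2f)|
    product = spectrum[:half] * bispec_diag
    k = int(np.argmax(product[1:])) + 1          # skip the DC bin
    return k * fs / len(x)

# Test tone: 220 Hz fundamental with a strong 440 Hz partial.
fs, dur = 8000, 1.0
t = np.arange(0, dur, 1 / fs)
x = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 440 * t)
print(round(fundamental_from_product_spectrum(x, fs)))  # -> 220
```

The product suppresses spurious candidates because a bin only scores highly when both its own magnitude and the coupling to its second harmonic are strong, which is the property the claimed method exploits.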

03-02-2022 publication date

REPRODUCTION CONTROL METHOD, REPRODUCTION CONTROL SYSTEM, AND REPRODUCTION CONTROL APPARATUS

Number: US20220036866A1

A computer-implemented reproduction control method includes reproducing sound from sound data representing a series of sounds including first sound and second sound that follows the first sound. The method includes starting reproducing the first sound, continuing the reproduction of a first sound until an end of the first sound in response to receiving a first instruction in a reproduction period of the first sound, stopping the reproduction of the first sound, and after the stopping of the reproduction of the first sound, starting reproducing the second sound in response to receiving a second instruction provided by a user. 1. A computer-implemented reproduction control method of reproducing sound from sound data representing a series of sounds including first sound and second sound that follows the first sound , the method comprising:starting reproducing the first sound;continuing the reproduction of a first sound until an end of the first sound in response to receiving a first instruction in a reproduction period of the first sound;stopping the reproduction of the first sound; andafter the stopping of the reproduction of the first sound, starting reproducing the second sound in response to receiving a second instruction provided by a user.2. The reproduction control method according to claim 1 , wherein:the sound data is performance data representative of a sounding period for each sound of the series of sounds, andin response to receiving the first instruction, the first sound is reproduced continuously until an end of the sounding period of the first sound specified by the performance data.3. The reproduction control method according to claim 1 , wherein the first instruction and the second instruction are each generated in accordance with a manipulation of a manipulation device by the user.4. The reproduction control method according to claim 3 , wherein:the first instruction is generated in response to a manipulation by the user to shift a state of the ...

03-02-2022 publication date

AUDIO PROCESSING TECHNIQUES FOR SEMANTIC AUDIO RECOGNITION AND REPORT GENERATION

Number: US20220036869A1

Example methods, apparatus and articles of manufacture to determine semantic information for audio are disclosed. Example apparatus disclosed herein are to process an audio signal obtained by a media device to determine values of a plurality of features that are characteristic of the audio signal, compare the values of the plurality of features to a first template having corresponding first ranges of the plurality of features to determine a first score, the first template associated with first semantic information, compare the values of the plurality of features to a second template having corresponding second ranges of the plurality of features to determine a second score, the second template associated with second semantic information, and associate the audio signal with at least one of the first semantic information or the second semantic information based on the first score and the second score. 120-. (canceled)21. An apparatus comprising:at least one memory;computer readable instructions in the apparatus; and process a first frame of an audio signal to determine first values of a plurality of features of the audio signal;', 'associate the first frame of the audio signal with at least one of first semantic information or second semantic information based on comparison of the first values of the plurality of features to a first template associated with the first semantic information and comparison of the first values of the plurality of features to a second template associated with the second semantic information, the first template having corresponding first ranges of the plurality of features, the second template having corresponding second ranges of the plurality of features;', 'process a second frame of the audio signal to determine second values of the plurality of features; and', 'associate the second frame of the audio signal with at least one of the first semantic information or the second semantic information based on comparison of the second values of ...
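
The two-template comparison in the claim can be pictured as scoring a feature vector against per-feature ranges and keeping the better-scoring label. In the sketch below a template is simply a dict of (low, high) ranges and the score is the number of in-range features; the feature names, ranges and tie-break behaviour are illustrative only.

```python
# Sketch of the template comparison: score a frame's feature values against
# each template's per-feature ranges and associate the frame with the label
# whose template scores highest. All names and ranges are hypothetical.
from typing import Dict, Tuple

Template = Dict[str, Tuple[float, float]]

def score(features: Dict[str, float], template: Template) -> int:
    return sum(1 for name, (lo, hi) in template.items()
               if lo <= features.get(name, float("nan")) <= hi)

def associate(features: Dict[str, float],
              templates: Dict[str, Template]) -> str:
    """Return the semantic label whose template best matches the features."""
    return max(templates, key=lambda label: score(features, templates[label]))

templates = {
    "talk":  {"tempo_bpm": (0.0, 40.0),    "spectral_flux": (0.0, 0.2), "zcr": (0.15, 0.60)},
    "dance": {"tempo_bpm": (110.0, 140.0), "spectral_flux": (0.3, 1.0), "zcr": (0.00, 0.20)},
}
frame_features = {"tempo_bpm": 126.0, "spectral_flux": 0.45, "zcr": 0.08}
print(associate(frame_features, templates))  # -> "dance"
```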

18-01-2018 publication date

System for mixing a video track with variable tempo music

Number: US20180018898A1
Author: Scott Humphrey
Assignee: Jammit Inc

The teachings described herein are generally directed to a system, method, and apparatus for separating and mixing tracks within music. The system can have a video that is synchronized with the variations in the musical tempo through a variable timing reference track designed and provided for a user of the preselected piece of music that was prerecorded, wherein the designing of the variable timing reference track includes creating a tempo map having variable tempos, rhythms, and beats using notes from the preselected piece of music.

18-01-2018 publication date

SYSTEM FOR EMBEDDING ELECTRONIC MESSAGES AND DOCUMENTS WITH AUTOMATICALLY-COMPOSED MUSIC USER-SPECIFIED BY EMOTION AND STYLE DESCRIPTORS

Number: US20180018948A1
Author: Silverstein Andrew H.
Assignee: Amper Music, Inc.

An automated music composition and generation system allowing uses to create and deliver electronic messages and documents such as text, SMS and email, augmented with automatically-composed music generated using user-selected music emotion and style descriptors. The automated music composition and generation system includes an automated music composition and generation engine operably connected to a system user interface, and the infrastructure of the Internet. Mobile and desktop client machines provide text, SMS and/or email services supported on the Internet. Each client machine has a text application, SMS application and/or email application that is augmented by the addition of automatically-composed music by users using the automated music composition and generation engine. By selecting and providing musical emotion and style descriptor icons to the engine, music is automatically composed, generated, and embedded in text, SMS and/or email messages for delivery to other client machines over the infrastructure of the Internet. 1. An Internet-based automated music composition and generation system allowing uses to create and deliver text , SMS and email messages augmented with automatically composed music generated using user-selected music emotion and style descriptors , said Internet-based automated music composition and generation system comprising:an automated music composition and generation engine operably connected to a system user interface, and the infrastructure of the Internet; andplurality of mobile and desktop client machines providing text, SMS and email services supported on the Internet;wherein each said client machine has a text application, SMS application and email application that can be augmented by the addition of automatically composed music by users using said automated music composition and generation engine, by selecting musical emotion descriptor icons, and musical style descriptor icons, that are provided to said automated music ...

16-01-2020 publication date

MEDIA CONTENT SYSTEM FOR ENHANCING REST

Number: US20200019371A1
Author: JEHAN Tristan, RANDO Mateo
Assignee: SPOTIFY AB

A media-playback device acquires a heart rate, selects a song with a first tempo, and initiates playback of the song. The song meets a set of qualification criteria and the first tempo is based on the heart rate, such as being equal to or less than the heart rate. The media-playback device also initiates playback of a binaural beat at a first frequency. Over a period of time, the binaural beat's first frequency is changed to a second frequency. Over the period of time, the first tempo can also be changed to a second tempo, where the second tempo is slower than the first tempo. 120-. (canceled)21. A method for selecting and playing a song with a mobile device , the method comprising:receiving a request from a user to play a song;receiving a context selection for a context for playback;detecting a user heart rate using a sensor and a light source of the mobile device, while the sensor and the light source are directed toward a part of the user's body;selecting a first song with a first tempo, wherein the first tempo is based on the user heart rate and the context selection; andinitiating playback of the first song with the first tempo on the mobile device.22. The method of claim 21 , wherein the first song with a first tempo is a first version of the first song claim 21 , and further comprising:receiving a change tempo input; andinitiating playback of a second version of the first song, wherein the second version of the first song has a second tempo.23. The method of claim 22 , further comprising:while initiating playback of the second version of the first song, initiating cross-fading between the first version of the first song and the second version of the first song; andafter initiating cross-fading, stopping playback of the first version of the first song.24. The method of claim 23 , wherein the first tempo is faster than the second tempo.25. The method of claim 23 , wherein a plurality of versions of the first song are stored on the mobile device claim 23 , each ...
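
The selection-and-slowdown behaviour can be summarised as a schedule: start at a tempo no higher than the measured heart rate, then ramp the tempo and the binaural-beat frequency down over the session. The sketch below assumes a linear ramp and arbitrary end points, neither of which is specified in the excerpt above.

```python
# Sketch of the ramp-down schedule: start at a tempo tied to the heart rate,
# then lower both the playback tempo and the binaural beat frequency over a
# relaxation period. Ramp shape and end points are assumptions.
def tempo_and_beat_schedule(heart_rate_bpm: float,
                            minutes: int = 10,
                            end_tempo_bpm: float = 60.0,
                            start_beat_hz: float = 10.0,
                            end_beat_hz: float = 4.0):
    """Yield (minute, target song tempo in BPM, binaural beat frequency in Hz)."""
    start_tempo = heart_rate_bpm          # "equal to or less than the heart rate"
    for m in range(minutes + 1):
        frac = m / minutes
        yield (m,
               round(start_tempo + (end_tempo_bpm - start_tempo) * frac, 1),
               round(start_beat_hz + (end_beat_hz - start_beat_hz) * frac, 1))

for minute, tempo, beat_hz in tempo_and_beat_schedule(heart_rate_bpm=72.0, minutes=5):
    print(f"min {minute}: play at {tempo} BPM, binaural beat {beat_hz} Hz")
# min 0: play at 72.0 BPM, binaural beat 10.0 Hz ... min 5: play at 60.0 BPM, binaural beat 4.0 Hz
```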

21-01-2021 publication date

Intelligent system for matching audio with video

Number: US20210020149A1
Author: Li Tzu-Hui

An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length and category and director's special requirement, actors expression, movement, weather, scene, buildings, spacial and temporal, things and a music analysis module targeting recorded music form, sectional turn, style, melody and emotional tension, and then uses an AI matching module to adequately match video of the video analysis module with musical characteristics of the music analysis module, so as to quickly complete a creative composition selection function with respect to matching audio with a video. 1. An intelligent system for matching audio with video , comprising:a video analysis module for making an analysis according to color tone, storyboard pace, video dialogue, length and category, director's special requirement, and characteristic, actors expression, movement, weather, scene, buildings, spatial and temporal factors, things, creature, character, character personality;a music analysis module for making an analysis according to recorded music form, sectional turn, style, genre, melody, tempo, instrument, chord accompaniment, voice type, rhythm, volume and emotional tension, wherein said music analysis and content comprise a music property analysis, an emotion analysis and music characteristic information;an AI matching module for connecting to the video analysis module and the music analysis module so as to adequately match a video with a musical characteristic; anda music editing module connected to the AI matching module, so as to impeccably match a time axis with an impact point between a music file and a video file by means of clip cutting and editing, music editing, music volume adjustment and sound field simulation.2. The intelligent system for matching audio with video according to claim 1 , wherein the video analysis module comprises an analysis of a color function and a color ...

17-01-2019 publication date

LIGHTING CONTROL DEVICE, LIGHTING CONTROL METHOD AND LIGHTING CONTROL PROGRAM

Number: US20190021153A1

A lighting controller is configured to make lighting control data in which lighting control information corresponding to a music piece is recorded and to control a lighting apparatus based on the lighting control data. The lighting controller includes: an information acquisition unit configured to acquire beat position information indicating beat positions of music piece data; a reference position plotting unit configured to plot a plurality of reference positions corresponding one-to-one to the beat positions of the beat position information; a lighting control information setting unit configured to set the lighting control information corresponding to the music piece; and a lighting control information editing unit configured to record the lighting control information with reference to the reference positions. 1. A lighting controller configured to generate lighting control data in which lighting control information corresponding to a music piece is registered and to control a lighting fixture based on the lighting control data , the lighting controller comprising:an information acquisition unit configured to acquire beat position information indicating beat positions of the music piece;a reference position plotting unit configured to plot a plurality of reference positions corresponding to the beat positions of the beat position information;a lighting control information setting unit configured to set the lighting control information corresponding to the music piece; anda lighting control information editing unit configured to register the lighting control information with reference to the reference positions.2. The lighting controller according to claim 1 , whereinthe lighting control information setting unit comprises a display unit and an operation unit, whereinthe display unit displays an image indicating the music piece and a grid indicating each of the reference positions over the image, andthe lighting control information is allowed to be set in the grid ...

24-01-2019 publication date

ENHANCING MUSIC FOR REPETITIVE MOTION ACTIVITIES

Number: US20190022351A1

A method of providing repetitive motion therapy comprising providing access to audio content; selecting audio content for delivery to a patient; performing an analysis on the selected audio content, the analysis identifying audio features of the selected audio content, and extracting rhythmic and structural features of the selected audio content; performing an entrainment suitability analysis on the selected audio content; generating entrainment assistance cue(s) to the selected audio content, the assistance cue(s) including a sound added to the audio content; applying the assistance cues to the audio content simultaneously with playing the selected audio content; evaluating a therapeutic effect on the patient, wherein the selected audio content continues to play when a therapeutic threshold is detected, and a second audio content is selected for delivery to the patient when a therapeutic threshold is not detected. 1. A method of providing repetitive motion therapy comprising:providing access to audio content;selecting audio content for delivery to a patient; the analysis identifying audio features of the selected audio content, and', 'extracting rhythmic and structural features of the selected audio content;, 'performing an analysis on the selected audio content,'}performing an entrainment suitability analysis on the selected audio content;generating entrainment assistance cue(s) to the selected audio content, the assistance cue(s) including a sound added to the audio content;applying the assistance cues to the audio content simultaneously with playing the selected audio content; 'wherein the selected audio content continues to play when a therapeutic threshold is detected, and a second audio content is selected for delivery to the patient when a therapeutic threshold is not detected.', 'evaluating a therapeutic effect on the patient,'}2. The method of claim 1 , further comprising updating a database of audio content to integrate feedback from the evaluating step.3 ...

28-01-2021 publication date

Device and method for training the muscles and connective tissue in the oral cavity and throat of a person, particularly for long-term avoidance of airway and sleep disorders and the consequences thereof

Number: US20210023329A1
Assignee: Asate AG

A device for training the muscles and connective tissue in the oral cavity and throat of a person, particularly for long-term avoidance of airway and sleep disorders and the consequences thereof, includes a wind instrument and furthermore an apparatus for detecting a sound produced by a person by the wind instrument. The device further includes an apparatus for evaluating the detected sound and an apparatus for outputting a response defined in dependence on a result of the evaluation to the person playing the wind instrument.

28-01-2016 publication date

Wearable sound

Number: US20160027338A1
Assignee: Not Impossible LLC

Vibratory motors are used to generate a haptic language for music or other sound that is integrated into wearable technology. The disclosed system enables the creation of a family of devices that allow people with hearing impairments to experience sounds such as music or other auditory input to the system. For example, a “sound vest” or one or more straps comprising a set of motors transforms musical input to haptic signals so that users can experience their favorite music in a unique way, and can also recognize auditory cues in the user's everyday environment and convey this information to the user using haptic signals.
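
The core mapping behind such a device is from frequency-band energy to per-motor vibration intensity. The sketch below splits a short frame into three bands and scales each band's energy to a 0-255 drive level; the band edges, motor count and scale are assumptions rather than details from the disclosure.

```python
# Illustrative sketch of the core mapping: split a short audio frame into a
# few frequency bands and drive one vibration motor per band with an
# intensity proportional to that band's energy.
import numpy as np

BAND_EDGES_HZ = [20, 250, 2000, 8000]          # low / mid / high, hypothetical

def frame_to_motor_levels(frame: np.ndarray, fs: float) -> list:
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:])]
    peak = max(energies) or 1.0
    return [int(255 * e / peak) for e in energies]  # one PWM-style level per motor

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
bass_heavy = np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
print(frame_to_motor_levels(bass_heavy, fs))   # -> [255, 0, 10]
```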

28-01-2016 publication date

AUDIO SIGNAL ANALYSIS

Number: US20160027421A1

An apparatus comprises a dereverberation module for generating a dereverberated audio signal based on an original audio signal containing reverberation, and an audio-analysis module for generating audio analysis data based on audio analysis of the original audio signal and audio analysis of the dereverberated audio signal. 150-. (canceled)51. A method comprising:generating a dereverberated audio signal based on an original audio signal containing reverberation; andgenerating audio analysis data based on audio analysis of the original audio signal and audio analysis of the dereverberated audio signal.52. The method of claim 51 , comprising performing audio analysis using the original audio signal and the dereverberated audio signal.53. The method of claim 51 , comprising performing audio analysis on one of original audio signal and the dereverberated audio signal based on results of the audio analysis of the other one of the original audio signal and the dereverberated audio signal.54. The method of claim 53 , comprising performing audio analysis on the original audio signal based on results of the audio analysis of the dereverberated audio signal.55. The method of claim 51 , comprising generating the dereverberated audio signal based on results of the audio analysis of the original audio signal.57. The method of claim 56 , comprising performing beat period determination analysis on the dereverberated audio signal and performing beat time determination analysis on the original audio signal.58. The method of claim 57 , comprising performing beat time determination analysis on the original audio signal based on results of the beat period determination analysis.59. The method of claim 51 , comprising analysing the original audio signal to determine if the original audio signal is derived from speech or from music and performing the audio analysis in respect of the dereverberated audio signal based on the determination as to whether the original audio signal is derived ...

26-01-2017 publication date

Electronic device, method and computer program

Number: US20170026748A1
Assignee: Sony Corp

An electronic device comprising a processing unit arranged to determine an estimation signal (y(k)) based on an input signal (x(k)) and based on a non-stationary reference signal (s 0 (k)).

10-02-2022 publication date

NETWORK MUSICAL INSTRUMENT

Number: US20220044661A1
Author: Elson Michael John

Methods and systems are described that are utilized for remotely controlling a musical instrument. A first digital record comprising musical instrument digital commands from a first electronic instrument for a first item of music is accessed. The first digital record is transmitted over a network using a network interface to a remote, second electronic instrument for playback to a first user. Optionally, video data is streamed to a display device of a user while the first digital record is played back by the second electronic instrument. A key change command is transmitted over the network using the network interface to the second electronic instrument to cause the second electronic instrument to playback the first digital record for the first item of music in accordance with the key change command. The key change command may be transmitted during the streaming of the video data. 1. (canceled)2. A networked music instrument system , comprising:a network interface;a computing device; and transmitting musical instrument digital commands of a first electronic instrument, associated with a first user, for a first item of music over a network using the network interface to a plurality of electronic instruments, remote from the first electronic instrument, associated with a first plurality of respective users for reproduction, at a same time, by respective remote electronic instruments in the plurality of electronic instruments;', 'setting, in response to detecting a key range command, a key range for at least one of the plurality of remote electronic instruments comprising a lowest note boundary and a highest note boundary;', 'causing, in response to detecting a transpose command, a transposition of at least a portion of the first item of music to a first key to be performed on at least one of the plurality of remote electronic instruments;', 'muting audio of a portion of the first plurality of respective users, wherein audio of a second user in the first plurality of ...

10-02-2022 publication date

INTELLIGENT ACCOMPANIMENT GENERATING SYSTEM AND METHOD OF ASSISTING A USER TO PLAY AN INSTRUMENT IN A SYSTEM

Number: US20220044666A1
Assignee: Positive Grid LLC

The intelligent accompaniment generating system includes an input module, an analysis module, a generation module and a musical equipment. The input module is configured to receive a musical pattern signal derived from a raw signal. The analysis module is configured to analyze the musical pattern signal to extract a set of audio features, wherein the input module is configured to transmit the musical pattern signal to the analysis module. The generation module is configured to obtain a playing assistance information having an accompaniment pattern from the analysis module, wherein the accompaniment pattern has at least two parts having different onsets therebetween, and each onsets of the at least two parts is generated by an algorithm according to the set of audio features. The musical equipment includes a digital amplifier configured to output an accompaniment signal according to the accompaniment pattern. 1. A method for assisting a user to play an instrument in a system including an input module , an analysis module , a generating module , an output module and a musical equipment having a computing unit , a digital amplifier and a speaker , the method comprising steps of:receiving an instrument signal by the input module;analyzing an audio signal to extract a set of audio features by the analysis module, wherein the audio signal includes one of the instrument signal and a musical signal from a resource;generating a playing assistance information according to the set of audio features by the generating module;processing the instrument signal with a DSP algorithm to simulate amps and effects of bass or guitar on the instrument signal to form a processed instrument signal by the computing unit;amplifying the processed instrument signal by the digital amplifier;amplifying at least one of the processed instrument signal and the musical signal by the speaker; andoutputting the playing assistance information by the output module to the user.2. The method as claimed in ...

25-01-2018 publication date

METHOD AND SYSTEM FOR ANALYSING SOUND

Number: US20180027347A1

The present invention relates to a method and system for analysing audio (eg. music) tracks. A predictive model of the neuro-physiological functioning and response to sounds by one or more of the human lower cortical, limbic and subcortical regions in the brain is described. Sounds are analysed so that appropriate sounds can be selected and played to a listener in order to stimulate and/or manipulate neuro-physiological arousal in that listener. The method and system are particularly applicable to applications harnessing a biofeedback resource. 1. A computer implemented system for analysing sounds , such as audio tracks , or any other types of sounds , the system including a processor programmed for automatically analysing sounds according to musical parameters derived from or associated with a predictive model of the neuro-physiological functioning and response to sounds by one or more of the human lower cortical , limbic and subcortical regions in the brain;and in which the system analyses sounds so that appropriate sounds can be selected and played to a listener in order to stimulate and/or manipulate neuro-physiological arousal in that listener.2. The system of in which the system is adapted to automatically analyse sounds and store the results of that analysis in a database so that appropriate sounds can subsequently be selected from that database and played to a listener to provide a desired stimulation and/or manipulation of neuro-physiological arousal in that listener.3. The system of where the musical parameters relate to rhythmicity.4. The system of where the musical parameters relate to harmonicity claim 1 , being the degree of correspondence to the harmonic series.5. The system of where the musical parameters relate to turbulence claim 1 , being a measure of rate of change and extent of change in musical experience.6. The system of claim 3 , which predictively models primitive spinal pathways and the pre-motor loop (such as the basal ganglia claim 3 , ...

28-01-2021 publication date

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE

Number: US20210028875A1

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event. 1. A computer implemented method for synchronizing an audio or MIDI file with a video file , the method including:receiving a first audio or MIDI file,receiving a video file, synchronizing the first audio or MIDI file with the video file,', 'marking an event in the video file at a point on a timeline,', 'detecting a first musical key for the event,', 'retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and', 'placing the musical stinger or swell at the point of the timeline marked for the event., 'operating an audio synchronization module to perform steps of2. The method of claim 1 , including a step of operating the audio synchronization module to drag the musical stinger or swell along the timeline manually to adjust the placement of the musical stinger or swell in the video file.3. The method of claim 1 , including a step of operating the audio synchronization module to mark a plurality of events in the video file claim 1 , and for each event in the plurality of events the audio synchronization module performs steps of:detecting a first musical key for the event,retrieving ...
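
The key-relevance test can be approximated with a standard music-theory heuristic: a stinger's tagged key is "relevant" if it equals the detected key, its relative major/minor, or a circle-of-fifths neighbour. That heuristic is assumed here for illustration; the excerpt above does not spell out the relevance rule.

```python
# Sketch of the "relevant key" lookup used when retrieving a stinger or swell.
# The relevance rule (relative key + fifths neighbours) is a common heuristic
# assumed for illustration, not quoted from the disclosure.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def relevant_keys(key: str) -> set:
    root, minor = (key[:-1], True) if key.endswith("m") else (key, False)
    i = NOTES.index(root)
    rel = NOTES[(i + 3) % 12] if minor else NOTES[(i - 3) % 12] + "m"   # relative key
    fifth_up = NOTES[(i + 7) % 12] + ("m" if minor else "")
    fifth_down = NOTES[(i - 7) % 12] + ("m" if minor else "")
    return {key, rel, fifth_up, fifth_down}

def pick_stinger(detected_key: str, library: dict) -> str:
    """Return the name of the first stinger tagged with a relevant key."""
    usable = relevant_keys(detected_key)
    for name, tagged_key in library.items():
        if tagged_key in usable:
            return name
    raise LookupError(f"no stinger relevant to {detected_key}")

library = {"swell_01": "F", "stinger_07": "Em", "stinger_12": "B"}
print(pick_stinger("G", library))   # -> "stinger_07" (E minor is the relative of G major)
```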

02-02-2017 publication date

Integrated system and method providing users with non-destructive ways of manipulating musical scores and or audio recordings for student practise, testing and assessment.

Number: US20170032691A1
Assignee: Rising Software Australia Pty Ltd.

An integrated system and method that provides users with non-destructive ways of manipulating musical scores and or audio recordings for student practise, testing and assessment. The said integrated system and method allows users to create questions that utilise one or more of the manipulated items, present questions to a student, collect the said students answer and assess the said students answer. The said integrated system is delivered via digital devices including computers, tablets, smartphones and other such devices. 1. An integrated system comprising:one or more digital devices;code that enables the non-destructive editing of music by an instructor;code that enables the creation of questions by an instructor;code that enables the presentation of questions to a student;code that enables collection of the said student's answer;code that enables assessment of the said student's answer.2. The said integrated system of wherein the said music is audio or a musical score.3. The said integrated system of wherein the said music is provided by the publisher of the said integrated system claim 1 , a third-party provider claim 1 , or provided by or created by the said instructor.4. The said integrated system of wherein the said creation of questions can be one of multiple choice or notation or tapping.5. The said integrated system of wherein the said creation of a multiple choice question comprises one of text or image or musical score or audio or a MIDI file.6. The said integrated system of wherein the said creation of a notation question comprises one of entering one or more notes on a musical score or entering one or more chord symbols on a musical score or highlighting one or more elements on a musical score.7. The said integrated system of wherein the said creation of a tapping question comprises one of tap a displayed musical score to a click track or tap along with a played musical score or remember and tap a played musical score.8. The said integrated system of ...

01-02-2018 publication date

CHAINED AUTHENTICATION USING MUSICAL TRANSFORMS

Number: US20180032716A1

A service receives a request from a user of a group of users to perform one or more operations requiring group authentication in order for the operations to be performed. In response, the service provides a first user of the group with an image seed and an ordering of the group of users. Each user of the group applies a transformation algorithm to the seed to create an authentication claim. The service receives this claim and determines, based at least in part on the ordering of the group of users, an ordered set of transformations, which are used to create a reference image file. If the received claim matches the reference image file, the service enables performance of the requested one or more operations. 1. A computer-implemented method comprising:receiving an authentication claim comprising a media encoding;determining a set of image transformations for a media seed, the set of image transformations comprising a first image transformation associated with a first entity and a second image transformation associated with a second entity, the first image transformation being different from the second image transformation;applying the set of image transformations to the media seed to generate a reference media file;determining that the reference media file matches the media encoding of the authentication claim; anddetermining that an operation for which authentication is required is authorized.2. The computer-implemented method of claim 1 , wherein the media seed is provided in response to having received a request to perform the operation.3. The computer-implemented method of claim 1 , wherein at least one of the first image transformation and the second image transformation includes a modification of a color hue of the media seed.4. The computer-implemented method of claim 1 , wherein at least one of the first image transformation and the second image transformation includes removal of an element of the media seed.5. The computer-implemented method of claim 1 , ...
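
The verification step is an ordered fold of transformations over the seed followed by an equality check. The sketch below uses byte-level hash transforms as stand-ins for the image/musical transforms of the disclosure; the helper names are hypothetical.

```python
# Generic sketch of the verification step: the service re-applies the group's
# transformations to the seed in the agreed order and accepts the claim only
# if the result matches what the group submitted. Byte-level transforms stand
# in for the image/musical transforms of the disclosure.
import hashlib

def transform_a(data: bytes) -> bytes:          # user A's transformation (illustrative)
    return hashlib.sha256(b"user-a" + data).digest()

def transform_b(data: bytes) -> bytes:          # user B's transformation (illustrative)
    return hashlib.sha256(b"user-b" + data).digest()

def build_reference(seed: bytes, ordered_transforms) -> bytes:
    reference = seed
    for t in ordered_transforms:                # order comes from the service
        reference = t(reference)
    return reference

def verify(claim: bytes, seed: bytes, ordered_transforms) -> bool:
    return claim == build_reference(seed, ordered_transforms)

seed = b"media-seed-0001"
claim = transform_b(transform_a(seed))          # what the group produced, A then B
print(verify(claim, seed, [transform_a, transform_b]))   # -> True
print(verify(claim, seed, [transform_b, transform_a]))   # -> False (wrong order)
```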

04-02-2016 publication date

Audio Processing Techniques for Semantic Audio Recognition and Report Generation

Number: US20160035332A1

System, apparatus and method for determining semantic information from audio, where incoming audio is sampled and processed to extract audio features, including temporal, spectral, harmonic and rhythmic features. The extracted audio features are compared to stored audio templates that include ranges and/or values for certain features and are tagged for specific ranges and/or values. Extracted audio features that are most similar to one or more templates from the comparison are identified according to the tagged information. The tags are used to determine the semantic audio data that includes genre, instrumentation, style, acoustical dynamics, and emotive descriptor for the audio signal. 1. A method for forming an audio template for determining semantic audio information , comprising:extracting a plurality of audio features from audio, at least one of the plurality of audio features including at least one of a temporal feature, a spectral feature, a harmonic feature, or a rhythmic feature;determining a range for each of the plurality of audio features; andstoring a set of ranges of the plurality of ranges to compare against other audio features from subsequent audio to generate a tag for the set of ranges signifying semantic audio information for the subsequent audio, wherein the set of ranges includes more than one range.2. The method of claim 1 , wherein the tag is associated with at least one of a genre descriptor claim 1 , an instrumentation descriptor claim 1 , a style descriptor claim 1 , an acoustical dynamics descriptor claim 1 , or an emotive descriptor for the audio.3. The method of claim 1 , wherein the tag is associated with a set of ranges including an audio timbre range claim 1 , a beat range claim 1 , a loudness range and a spectral histogram range.4. The method of claim 1 , wherein the tag is associated with timber and the set of ranges includes a range for the mean of the spectral centroid claim 1 , a range for the variance of the spectral centroid ...

01-02-2018 publication date

METHOD AND SYSTEM FOR DETERMINING AND PROVIDING SENSORY EXPERIENCES

Number: US20180033263A1

A method including: receiving a music input; determining values of musical parameters based on the input; generating a spatial representation of the music input based on the values; and at a plurality of haptic actuators defining a spatial distribution, cooperatively producing a haptic output based on the spatial representation. A method including: mechanically coupling haptic actuators defining a multidimensional array to a user; receiving a music input; generating a spatial representation of the music input defined on a multidimensional space, wherein the multidimensional space and the multidimensional array have equal dimensionality; and, for each haptic actuator: based on the haptic actuator location within the multidimensional array, determining a corresponding location within the multidimensional space; based on a value of the spatial representation associated with the corresponding location, determining an actuation intensity; and controlling the haptic actuator to actuate based on the actuation intensity. 1. A method comprising:receiving a music input;based on the music input, determining a first value of a primary musical parameter;based on the music input, determining a second value of a secondary musical parameter different than the primary musical parameter;based on the first and second values, generating a spatial representation of the music input; andat a plurality of haptic actuators defining a spatial distribution, cooperatively producing a haptic output based on the spatial representation.3. The method of claim 2 , wherein:the primary musical parameter is an instrument classification parameter; anddetermining the plurality of input subsets comprises determining a respective instrument class associated with each input subset of the plurality.4. The method of claim 3 , wherein the secondary musical parameter is a musical frequency parameter.5. The method of claim 2 , further comprising claim 2 , for each parameter value pair of the set claim 2 , ...

01-02-2018 publication date

Audio Processing Techniques for Semantic Audio Recognition and Report Generation

Number: US20180033416A1

Example apparatus, articles of manufacture and methods to determine semantic audio information for audio are disclosed. Example methods include extracting a plurality of audio features from the audio, at least one of the plurality of audio features including at least one of a temporal feature, a spectral feature, a harmonic feature, or a rhythmic feature. Example methods also include comparing the plurality of audio features to a plurality of stored audio feature ranges having tags associated therewith. Example methods further include determining a set of ranges of the plurality of stored audio feature ranges having closest matches to the plurality of audio features, a tag associated with the set of ranges having the closest matches to be used to determine the semantic audio information for the audio. 1. A apparatus to determine semantic audio information for audio , the apparatus comprising:memory including computer readable instructions; and extract a plurality of audio features from the audio, at least one of the plurality of audio features including at least one of a temporal feature, a spectral feature, a harmonic feature, or a rhythmic feature;', 'compare the plurality of audio features to a plurality of stored audio feature ranges having tags associated therewith; and', 'determine a set of ranges of the plurality of stored audio feature ranges having closest matches to the plurality of audio features, a tag associated with the set of ranges having the closest matches to be used to determine the semantic audio information for the audio., 'a processor to execute the computer readable instructions to2. The apparatus of claim 1 , wherein the tag is associated with at least one of a genre descriptor claim 1 , an instrumentation descriptor claim 1 , a style descriptor claim 1 , an acoustical dynamics descriptor claim 1 , or an emotive descriptor for the audio.3. The apparatus of claim 1 , wherein the tag is associated with at least one of an audio timbre range ...

30-01-2020 publication date

DISPLAY CONTROL SYSTEM AND DISPLAY CONTROL METHOD

Number: US20200034386A1
Author: Numata Kohei, Saito Jin

A method according to one aspect of the present disclosure includes acquiring verbal data representing a verbal expression corresponding to a sound reproduced by an acoustic device, and displaying, on a display device, motion graphics including the verbal expression corresponding to the sound reproduced by the acoustic device in a form of a text in accordance with the verbal data. The displaying the motion graphics on the display device includes selecting a type of motion graphics that relates to the verbal expression corresponding to the reproduced sound from among various types of motion graphics and displaying the selected type of motion graphics on the display device. 1. A display control system comprising:an acquirer configured to acquire lyrics data of a musical piece reproduced by an acoustic device; anda display controller configured to display, on a display device, motion graphics corresponding to the musical piece reproduced by the acoustic device in accordance with the lyrics data acquired by the acquirer, determine a discrepancy between a reproduction length of the musical piece identified based on the lyrics data and a reproduction length of the musical piece determined based on the musical piece data used for reproducing the musical piece;', 'select a type of motion graphics that fits a meaning of lyrics of the reproduced musical piece from among various types of motion graphics so that a type of motion graphics having moderate changes is selected when the discrepancy is greater than a standard compared to when the discrepancy is less than a standard; and', 'display the selected type of motion graphics on the display device in harmony with progression of the musical piece reproduced by the acoustic device, the selected type of motion graphics including the lyrics of the musical piece in a form of a text., 'wherein the display controller is configured to2. (canceled)3. (canceled)4. The display control system according to claim 1 ,wherein the various ...

31-01-2019 publication date

Self-Produced Music Server and System

Number: US20190035372A1
Author: Yoelin Louis

An application operating on a smartphone records a musician's performance, either vocal or instrumental, in combination with pre-recorded music. The combination allows for auto-tuning, compression and equalization of the recording, adding reverb, and audio quantization of the rhythm. Once combined, the song is transmitted to social media and/or to an online store for sale. The user can also make a video with the song. Additional marketing features such as song competitions and music reviews and ratings are also provided.

Publication date: 31-01-2019

MUSIC DETECTION AND IDENTIFICATION

Number: US20190035401A1
Assignee: InvenSense, Inc.

A sensor processing unit comprises a microphone and a sensor processor. The sensor processor is coupled with the microphone. The sensor processor is configured to operate the microphone to capture an audio sample from an environment in which the microphone is disposed. The sensor processor is configured to perform music activity detection on the audio sample to detect for music within the audio sample. Responsive to detection of music within the audio sample, the sensor processor is configured to send a music detection signal to an external processor located external to the sensor processing unit, the music detection signal indicating that music has been detected in the environment.

Publication date: 17-02-2022

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE

Number: US20220052773A1
Assignee:

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
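As a rough illustration of the block-selection step described in the related claims (choosing prerecorded audio blocks from a pool according to a harmonic map, tempo, genre and mood), the sketch below filters an in-memory pool. The data model and field names are assumptions made for the example, not the patent's actual schema.

```python
from dataclasses import dataclass
import random

@dataclass
class AudioBlock:
    block_id: str   # unique identifier for the block
    track: str      # source audio track the block was cut from
    key: str        # musical key of the block, e.g. "Am"
    tempo: int      # tempo in BPM
    genre: str
    mood: str

def select_blocks(pool, harmonic_map, tempo, genre, mood, tempo_tolerance=5):
    """Pick one matching block per key in the harmonic map (a list of keys, one per section)."""
    chosen = []
    for section_key in harmonic_map:
        candidates = [
            b for b in pool
            if b.key == section_key
            and abs(b.tempo - tempo) <= tempo_tolerance
            and b.genre == genre and b.mood == mood
        ]
        if candidates:
            chosen.append(random.choice(candidates))  # block_id keeps selections distinguishable
    return chosen

pool = [
    AudioBlock("b1", "track1", "Am", 120, "rock", "dark"),
    AudioBlock("b2", "track2", "C", 118, "rock", "dark"),
    AudioBlock("b3", "track3", "Am", 122, "rock", "dark"),
]
print(select_blocks(pool, harmonic_map=["Am", "C"], tempo=120, genre="rock", mood="dark"))
```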

Publication date: 12-02-2015

SAMPLING DEVICE AND SAMPLING METHOD

Number: US20150040740A1
Author: SETOGUCHI Masaru
Assignee: CASIO COMPUTER CO., LTD.

A sampling device includes: an obtainer that obtains a streaming audio waveform; a detector that detects a tap by a user; a designator that designates a sampling start point on the obtained audio waveform when at least a single tap has been detected by the detector, the designating being performed on the basis of one tap among a plurality of the taps when the plurality of taps are detected; a calculator that calculates a sampling duration on the basis of time intervals between the respective taps when the plurality of taps are detected, the calculating being performed from the sampling start point to a subsequent tap that is performed after the one tap; and a waveform sampler that samples the obtained audio waveform, the sampling starting from the designated sampling start point and ending in accordance with the calculated sampling duration.
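A minimal sketch of how a sampling start point and duration might be derived from tap timestamps, assuming the start point is the first tap and the duration is quantized to whole average tap intervals; the device's exact rules may differ.

```python
def sampling_window(tap_times):
    """Return (start, duration) in seconds derived from tap timestamps.

    Start is the first tap; the duration runs from the start tap to a subsequent
    tap, quantized here to a whole number of average tap intervals."""
    if not tap_times:
        raise ValueError("at least one tap is required")
    start = tap_times[0]
    if len(tap_times) == 1:
        return start, None                     # start point only; no interval to derive a duration from
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    beats = round((tap_times[-1] - start) / mean_interval)
    return start, beats * mean_interval

print(sampling_window([2.0, 2.5, 3.0, 3.5]))   # (2.0, 1.5)
```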

Publication date: 04-02-2021

SYSTEMS AND METHODS FOR RECOMMENDING COLLABORATIVE CONTENT

Number: US20210034661A1
Assignee:

The system recommends a media content item, from among a plurality of media content items, for performance by a user. The performance can include a series of actions, which can optionally be recorded or otherwise captured to be considered as collaborative content. The system analyzes at least one physical performance property relative to a corresponding physical performance property of each of the plurality of media content items. The at least one physical performance property is determined from profile information that is associated with the user, and may include a temporal, spectral, video, audio, or other property. Based on the analysis, the system identifies the media content item as being compatible or incompatible for performance or collaboration by the user. The system generates for output, which can include storage in memory or display on a device, a recommendation of the media content item.

Publication date: 04-02-2021

SYSTEMS AND METHODS FOR RECOMMENDING COLLABORATIVE CONTENT

Number: US20210035541A1
Assignee:

The system generates a recommendation for collaborative content to be consumed, thus allowing a large field of content to be parsed. The content may include audio content, video content, image content, or other content. The system identifies a collaborative content and a base content upon which the collaborative content is generated. Based on analysis of the collaborative content and profile information, the system determines a recommendation metric, or score. Based on the metric, the system generates a recommendation of the collaborative content. Collaborative content that is better formed based on signal properties, favorably compared to the base content, created by more highly rated users, or formatted in a preferred way may be more strongly recommended. Signal properties include temporal, spectral, audio, or visual properties of the content. The system outputs the recommendation for storage, display, or both to provide guidance to users consuming or reviewing collaborative content.

Publication date: 11-02-2016

SYSTEM AND METHOD FOR SELECTIVE REMOVAL OF AUDIO CONTENT FROM A MIXED AUDIO RECORDING

Number: US20160041807A1
Assignee:

Systems and techniques for removing a sound recording from an audio recording (e.g., an audio recording embedded in a media file) are presented. The system can include an identification component, a first subtraction component and a second subtraction component. The identification component identifies a sound recording in a mixed audio recording. The first subtraction component determines a local linear transformation of the sound recording and subtracts the local linear transformation of the sound recording from the mixed audio recording to generate a new mixed audio recording. The second subtraction component compares one or more segments of the sound recording with one or more corresponding segments of the new mixed audio recording and reduces a power level of the new mixed audio recording based at least in part on correlation of the one or more corresponding segments with the one or more segments.
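The two-stage idea can be sketched as follows: a per-segment least-squares (locally fitted) copy of the identified recording is subtracted from the mix, and any residual segment that still correlates strongly with the recording has its power reduced. Segment size, correlation threshold and attenuation factor below are illustrative assumptions, not the patent's values.

```python
import numpy as np

def remove_recording(mix, ref, seg=4096, corr_threshold=0.6, attenuation=0.5):
    """Two-stage removal sketch: per-segment least-squares subtraction of the
    reference recording, then power reduction where the residual still
    correlates with the reference."""
    out = np.copy(mix)
    for i in range(0, min(len(mix), len(ref)) - seg + 1, seg):
        m, r = out[i:i + seg], ref[i:i + seg]
        # stage 1: local linear fit of the reference to the mix, then subtract
        denom = np.dot(r, r)
        gain = np.dot(m, r) / denom if denom > 0 else 0.0
        residual = m - gain * r
        # stage 2: if the residual still correlates with the reference, reduce its power
        if np.std(residual) > 0 and np.std(r) > 0:
            corr = np.corrcoef(residual, r)[0, 1]
            if abs(corr) > corr_threshold:
                residual = residual * attenuation
        out[i:i + seg] = residual
    return out

t = np.arange(2 * 44100) / 44100.0
ref = np.sin(2 * np.pi * 440.0 * t)                 # identified sound recording
other = 0.3 * np.sin(2 * np.pi * 221.0 * t)         # material to preserve
cleaned = remove_recording(0.8 * ref + other, ref)
print(np.round(np.std(cleaned) / np.std(other), 2)) # close to 1.0 when removal worked
```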

Publication date: 11-02-2016

SYSTEMS AND METHODS FOR QUANTIFYING A SOUND INTO DYNAMIC PITCH-BASED GRAPHS

Number: US20160042657A1
Assignee: QUANTZ COMPANY LLC

A system and method that quantifies a sound into dynamic pitch-based graphs that correlate to the pitch frequencies of the sound. The system records a sound, such as musical notes. A pitch detection algorithm identifies and quantifies the pitch frequencies of the notes. The algorithm analyzes the pitch frequencies, and graphically displays the pitch frequency and notes in real time as fluctuating circles, rectangular bars, and lines that represent variances in pitch. The algorithm comprises a modified Type 2 Normalized Square Difference Function that transforms the musical notes into the pitch frequencies. The Type 2 Normalized Square Difference Function analyzes the peaks of the pitch frequency to arrive at a precise pitch frequency, such as 440 Hertz. A Lagrangian interpolation enables comparative analysis and teaching of the pitches and notes. The algorithm also performs transformations and heuristic comparisons to generate the real time graphical representation of the pitch frequency.
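A compact sketch in the spirit of an NSDF-based pitch detector: compute a normalized square difference function, pick the first strong peak, and refine it with a three-point (parabolic, i.e. second-order Lagrange) interpolation. The clarity threshold and window length are assumptions, and this is not the patent's modified Type 2 formulation.

```python
import numpy as np

def nsdf(x):
    """Normalized square difference function n(tau) = 2*r(tau) / m(tau)."""
    n = len(x)
    out = np.zeros(n)
    for tau in range(n):
        a, b = x[:n - tau], x[tau:]
        r = np.dot(a, b)                       # autocorrelation term
        m = np.dot(a, a) + np.dot(b, b)        # combined power term
        out[tau] = 2.0 * r / m if m > 0 else 0.0
    return out

def detect_pitch(x, fs, threshold=0.8):
    """Pick the first NSDF local maximum above a clarity threshold and refine it
    with three-point parabolic interpolation. Returns frequency in Hz."""
    d = nsdf(x)
    for tau in range(2, len(d) - 1):
        if d[tau] > threshold and d[tau] >= d[tau - 1] and d[tau] >= d[tau + 1]:
            y0, y1, y2 = d[tau - 1], d[tau], d[tau + 1]
            denom = y0 - 2 * y1 + y2
            shift = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
            return fs / (tau + shift)
    return None

fs = 8000
t = np.arange(1024) / fs
print(detect_pitch(np.sin(2 * np.pi * 440.0 * t), fs))   # approximately 440 Hz
```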

Publication date: 11-02-2016

Finding Differences in Nearly-Identical Audio Recordings

Number: US20160042761A1
Author: Lu Yang, Motta Giovanni
Assignee:

Systems and techniques are provided for finding differences in nearly-identical audio recordings. A first version of an audio recording may be received. A second version of the audio recording may be received. A difference between the first version of the audio recording and the second version of the audio recording may be determined using time domain analysis and frequency domain analysis. The difference may be stored in a difference set. The difference set may allow the first version of the audio recording to be distinguished from the second version of the audio recording. The audio recording may be a music track. The first version of the audio recording may be an explicit version of the music track. The second version of the audio recording may be an edited version of the music track.
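A minimal sketch of the time-domain half of such a comparison, assuming the two versions are already aligned: partition both into fixed-length blocks, subtract block pairs, and flag blocks whose residual energy is large relative to the original. Block size and threshold are illustrative.

```python
import numpy as np

def time_domain_differences(a, b, block=4096, threshold=0.25):
    """Compare two aligned versions block by block and collect the sample ranges
    whose residual energy exceeds `threshold` times the original block energy
    (for example, a word muted in the edited version)."""
    n = min(len(a), len(b))
    diffs = []
    for i in range(0, n - block + 1, block):
        x, y = a[i:i + block], b[i:i + block]
        residual = x - y
        if np.dot(residual, residual) > threshold * (np.dot(x, x) + 1e-12):
            diffs.append((i, i + block))
    return diffs

fs = 44100
explicit = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
edited = explicit.copy()
edited[10000:14096] = 0.0                  # simulate a muted word in the edited version
print(time_domain_differences(explicit, edited))
```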

Publication date: 04-02-2021

COORDINATING AND MIXING AUDIOVISUAL CONTENT CAPTURED FROM GEOGRAPHICALLY DISTRIBUTED PERFORMERS

Number: US20210037166A1
Assignee:

Audiovisual performances, including vocal music, are captured and coordinated with those of other users in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for visually prominent presentation performance synchronized video of one or more of the contributors. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists.

Publication date: 16-02-2017

METHOD AND APPARATUS FOR RECOMMENDING MUSIC, AND BICYCLE

Number: US20170043236A1
Author: LI Dalong
Assignee:

Embodiments of the disclosure provide a method and apparatus for recommending music, and a bicycle, the method including: acquiring a cadence of a sporting user; determining a number of beats corresponding to the acquired cadence according to the cadence and a preset correspondence relationship between the cadence and the number of beats; and recommending music agreeing with the number of beats for the sporting user according to the number of beats, so that the sporting user is provided with music matching his or her sporting state, better satisfying the user and improving the user experience.
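The cadence-to-beats mapping and the recommendation step can be sketched as a lookup followed by a BPM filter; the mapping table, tolerance and song library below are placeholders, not values from the disclosure.

```python
# Illustrative cadence-to-BPM mapping and music selection.
CADENCE_TO_BPM = [
    (0, 50, 90),       # cadence range (rpm) -> recommended beats per minute
    (50, 70, 110),
    (70, 90, 130),
    (90, 999, 150),
]

SONG_LIBRARY = [
    {"title": "Song A", "bpm": 92},
    {"title": "Song B", "bpm": 128},
    {"title": "Song C", "bpm": 148},
]

def beats_for_cadence(cadence_rpm):
    """Look up the preset number of beats corresponding to a cadence."""
    for low, high, bpm in CADENCE_TO_BPM:
        if low <= cadence_rpm < high:
            return bpm
    return None

def recommend(cadence_rpm, tolerance=10):
    """Return songs whose tempo agrees with the beats determined from the cadence."""
    bpm = beats_for_cadence(cadence_rpm)
    return [s for s in SONG_LIBRARY if abs(s["bpm"] - bpm) <= tolerance]

print(recommend(75))   # -> the 128 BPM song
```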

Publication date: 24-02-2022

System and Method For Reusable Digital Video Templates Incorporating Cumulative Sequential Iteration Technique In Music Education

Number: US20220058978A1
Author: Lewis John Eric
Assignee:

Disclosed is a method of presenting a media file enabling a user to emulate musical content therein. The media file comprises a plurality of segments, each segment representing a demonstration of at least part of a musical phrase comprising one or more musical notes of a piece of music. The method comprises presenting a first segment of the plurality of segments for emulation by the user. The method further comprises subsequently presenting the first segment followed by a second segment of the plurality of segments for emulation by the user. The method further comprises subsequently presenting the previously presented segments followed by additional segments until all of the plurality of segments have been presented for emulation by the user.
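The cumulative sequential iteration reduces to a simple rule: at step i, replay segments 1 through i. A minimal sketch:

```python
def cumulative_presentation_order(segments):
    """Yield the segments to play at each step: segment 1, then 1-2, then 1-2-3, and so on.

    Each step replays everything presented so far plus one new segment, which is the
    cumulative sequential iteration described above."""
    for i in range(1, len(segments) + 1):
        yield segments[:i]

phrases = ["bar 1", "bar 2", "bar 3"]
for step, to_play in enumerate(cumulative_presentation_order(phrases), start=1):
    print(f"step {step}: play {to_play}")
```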

Publication date: 24-02-2022

Comparison Training for Music Generator

Number: US20220059062A1
Assignee:

Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. A music generation system includes a music generation subsystem and a music classification subsystem. The music generation subsystem may generate output music content according to music parameters that define policy for generating music. The classification subsystem may be used to classify whether music is generated by the music generation subsystem or is professionally produced music content. The music generation subsystem may implement an algorithm that is reinforced by prediction output from the music classification subsystem. Reinforcement may include tuning the music parameters to generate more human-like music content.
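A toy sketch of the reinforcement loop: a generator parameterized by music parameters, a classifier that scores how "produced" the output seems, and a hill-climb that keeps parameter changes the classifier rewards. Both the generator and the classifier here are numeric stand-ins invented for the example, not the patent's actual subsystems.

```python
import random

def generate(params):
    """Stand-in generator: the 'output music content' is just numbers shaped by the parameters."""
    return [random.gauss(params["density"], params["variation"]) for _ in range(16)]

def classifier_score(content):
    """Stand-in classifier: a score for how much the content resembles produced music.
    Here it simply prefers moderate density and some variation."""
    mean = sum(content) / len(content)
    spread = max(content) - min(content)
    return max(0.0, 1.0 - abs(mean - 0.5)) * min(1.0, spread)

def tune(params, steps=200, lr=0.05):
    """Hill-climb the music-generation parameters using the classifier score as reinforcement."""
    best = classifier_score(generate(params))
    for _ in range(steps):
        trial = {k: v + random.uniform(-lr, lr) for k, v in params.items()}
        trial["variation"] = max(0.01, trial["variation"])
        score = classifier_score(generate(trial))
        if score > best:                      # keep parameter changes that the classifier rewards
            params, best = trial, score
    return params, best

print(tune({"density": 0.0, "variation": 0.1}))
```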

Publication date: 24-02-2022

SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE

Number: US20220060269A1
Assignee:

A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.

Publication date: 06-02-2020

AUTOMATIC ISOLATION OF MULTIPLE INSTRUMENTS FROM MUSICAL MIXTURES

Number: US20200042879A1
Assignee:

A system, method and computer product for training a neural network system. The method comprises inputting an audio signal to the system to generate plural outputs f(X, Θ). The audio signal includes one or more of vocal content and/or musical instrument content, and each output f(X, Θ) corresponds to a respective one of the different content types. The method also comprises comparing individual outputs f(X, Θ) of the neural network system to corresponding target signals. For each compared output f(X, Θ), at least one parameter of the system is adjusted to reduce a result of the comparing performed for the output f(X, Θ), to train the system to estimate the different content types. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate various different types of vocal and/or instrument components of an audio signal, depending on which type of component(s) the system is trained to estimate.
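The comparison step can be sketched as computing, for each content type, a distance between the masked mixture spectrogram (the network output f(X, Θ) applied to the input) and that type's target spectrogram. The L1 distance used below is an illustrative choice, and the masks and targets are random stand-ins rather than real network outputs.

```python
import numpy as np

def per_source_losses(mix_spec, masks, target_specs):
    """For each content type, apply the estimated mask to the mixture spectrogram and
    compare against that type's target spectrogram (L1 distance as an illustrative choice)."""
    losses = {}
    for name, mask in masks.items():
        estimate = mask * mix_spec                      # f(X, Θ) applied as a soft mask
        losses[name] = np.mean(np.abs(estimate - target_specs[name]))
    return losses

freq_bins, frames = 257, 100
mix = np.abs(np.random.randn(freq_bins, frames))
targets = {"vocals": 0.4 * mix, "drums": 0.6 * mix}     # toy targets for the example
masks = {"vocals": np.full_like(mix, 0.5), "drums": np.full_like(mix, 0.5)}
print(per_source_losses(mix, masks, targets))
```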

Publication date: 06-02-2020

MULTIPLE AUDIO TRACK RECORDING AND PLAYBACK SYSTEM

Number: US20200043453A1
Author: LANG Thomas A., Li Yan
Assignee:

A multiple audio track recording and playback system having at least two audio inputs: a first audio input for receipt and recording of audio tracks (AT) representing a first audio stream, and a second audio input for receipt of a second audio stream. The system is configured for playback of audio tracks recorded on the basis of the first audio stream, the playback being performed with reference to a tempo reference. The tempo reference is automatically derived from beats obtained through beat detection, and the system is configured for beat detection on the basis of at least the first audio stream and the second audio stream.
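Deriving a tempo reference from detected beats can be as simple as taking the median inter-beat interval; this is an illustrative choice, not necessarily the system's method.

```python
import numpy as np

def tempo_reference(beat_times):
    """Derive a tempo reference (BPM) from detected beat times using the median
    inter-beat interval; the median keeps occasional missed or extra beats from
    skewing the tempo."""
    if len(beat_times) < 2:
        return None
    intervals = np.diff(np.asarray(beat_times, dtype=float))
    return 60.0 / float(np.median(intervals))

print(tempo_reference([0.0, 0.52, 1.01, 1.50, 2.02]))   # roughly 120 BPM
```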

Publication date: 06-02-2020

Singing voice separation with deep u-net convolutional networks

Number: US20200043517A1
Assignee: SPOTIFY AB

A system, method and computer product for estimating a component of a provided audio signal. The method comprises converting the provided audio signal to an image, processing the image with a neural network trained to estimate one of vocal content and instrumental content, and storing a spectral mask output from the neural network as a result of the image being processed by the neural network. The neural network is a U-Net. The method also comprises providing the spectral mask to a client media playback device, which applies the spectral mask to a spectrogram of the provided audio signal, to provide a masked spectrogram. The media playback device also transforms the masked spectrogram to an audio signal, and plays back that audio signal via an output user interface.
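A sketch of the playback-side masking described above, assuming a magnitude mask with the same shape as the spectrogram: compute the spectrogram, multiply by the mask, and transform the masked spectrogram back to audio. The mask here is a placeholder for the network's output, and scipy's STFT/ISTFT stand in for whatever transform the client device actually uses.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_vocal_mask(audio, mask, fs, nperseg=1024):
    """Apply a (freq x frames) spectral mask to the signal's spectrogram and
    transform the masked spectrogram back to audio."""
    _, _, spec = stft(audio, fs=fs, nperseg=nperseg)
    masked = mask * spec                      # keep bins the mask favours, reuse the mixture phase
    _, isolated = istft(masked, fs=fs, nperseg=nperseg)
    return isolated

fs = 22050
mixture = np.random.randn(fs * 2)                         # stand-in for a decoded track
_, _, spec = stft(mixture, fs=fs, nperseg=1024)
dummy_mask = np.ones_like(np.abs(spec)) * 0.5             # placeholder "vocal" mask
vocals = apply_vocal_mask(mixture, dummy_mask, fs)
print(vocals.shape)
```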

Publication date: 18-02-2021

INTEGRAL VISUAL LEARNING SYSTEM FOR THE ELECTRIC GUITAR AND SIMILAR INSTRUMENTS

Number: US20210049926A1
Assignee:

In summary, the integral visual learning system for electric guitar and similar instruments to which the present invention refers has a learning method that is integrated into an application for mobile devices and contains all the learning methodology and knowledge that will be transmitted to the user to develop the ability to play the instrument. This learning method is connected to a display device on the instrument fretboard by means of a Bluetooth communication device and allows the user to visualize, by means of the illumination of transparent resins, information on notes, chords, musical scales, and other information sent from the learning method. The system also has an augmented reality device made up of augmented reality glasses that allow the user to visualize a virtual teacher who provides theoretical information, as well as superimposed images showing the position and shape of the fingers of the left/right hand needed to correctly play the notes, chords and musical scales. Likewise, the system has a feedback device that identifies the notes and/or chords that the user played on the instrument and compares them with the notes and/or chords requested in the exercises contained in the learning method; this allows users to identify the precision with which they performed the exercises and gradually improve until they obtain the required mastery at the different learning levels. Finally, the system has a recharge device for a lithium-ion battery that powers the entire system.

Publication date: 15-02-2018

DEVICE, SYSTEM AND METHOD FOR GENERATING AN ACCOMPANIMENT OF INPUT MUSIC DATA

Number: US20180046709A1
Assignee: SONY CORPORATION

A device for automatically generating a real time accompaniment of input music data includes a music input that receives music data. A music analyzer analyzes received music data to obtain a music data description including one or more characteristics of the analyzed music data. A query generator generates a query to a music database including music patterns and associated metadata including one or more characteristics of the music patterns, the query being generated from the music data description and from an accompaniment description describing preferences of the real time accompaniment and/or music rules describing general rules of music. A query interface queries the music database using a generated query and receives a music pattern selected from the music database by use of the query. A music output outputs the received music pattern.
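The query step can be sketched as merging the analyzed music-data description with the accompaniment preferences and filtering a pattern database on the result; the database contents and field names below are invented for the example, not the patent's schema.

```python
# Toy in-memory "music database" of accompaniment patterns.
PATTERN_DB = [
    {"pattern": "walking-bass-01", "style": "jazz", "key": "C", "tempo": 120},
    {"pattern": "rock-groove-02", "style": "rock", "key": "E", "tempo": 140},
    {"pattern": "bossa-comp-03",  "style": "latin", "key": "C", "tempo": 110},
]

def build_query(music_description, accompaniment_preferences):
    """Merge the analyzed characteristics of the input music with the requested
    accompaniment preferences into a single query."""
    return {**music_description, **accompaniment_preferences}

def query_patterns(query, tempo_tolerance=10):
    """Return patterns whose metadata matches the query."""
    return [
        p for p in PATTERN_DB
        if p["key"] == query["key"]
        and p["style"] == query["style"]
        and abs(p["tempo"] - query["tempo"]) <= tempo_tolerance
    ]

q = build_query({"key": "C", "tempo": 118}, {"style": "jazz"})
print(query_patterns(q))    # -> the walking-bass pattern
```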

Publication date: 03-03-2022

SYSTEM, METHOD AND MULTI-FEATURED COMPUTER PROGRAM PRODUCT FOR VIDEO CREATION

Number: US20220062776A1
Author: Sakib Shadman
Assignee:

A multi-functional computer-implemented system for creation of videos with various effects is provided. The computer-implemented system of the present invention comprises: a guessing subsystem that allows a user to create and share a video with a guessing option so that other users can play a guessing game while watching the video; a challenge subsystem that allows users to challenge other users to video battles; a tracking subsystem that allows the user to select any specific segment of the video, determines the music/audio effect present within that segment and visualizes that effect in the video as well; and a gesture detection subsystem that detects the user's gesture and performs the action associated with the detected gesture.

Publication date: 16-02-2017

ELECTRONIC DEVICE AND OPERATION METHOD THEREOF

Number: US20170047082A1
Assignee:

Disclosed is an electronic device that can acoustically or visually synchronize a plurality of independent beats (or tempos) and output them when executing a music application. The device includes a user interface, a memory, and one or more processors electrically connected to the user interface and the memory, which display tempo progress information of music in response to playing of the music, detect an event while the music is being played, synchronize the played music and the tempo progress information of the music according to the event, and output the synchronized music and tempo progress information.

Publication date: 15-02-2018

MUSIC PRACTICE FEEDBACK SYSTEM, METHOD, AND RECORDING MEDIUM

Number: US20180047300A1
Assignee:

A music practice feedback system comprising a processor and a memory storing instructions that cause the processor to perform monitoring an outcome of a user's playing of sheet music and, based on the outcome, suggesting a type of improvement to increase the success rate of playing the sheet music.

Publication date: 13-02-2020

ROLE SIMULATION METHOD AND TERMINAL APPARATUS IN VR SCENE

Number: US20200047074A1
Author: Zhang Hongyi
Assignee:

A role simulation method and a terminal apparatus in a VR scene are provided. The role simulation method in the VR scene includes: obtaining a role obtaining instruction triggered by a hand model in the VR scene, movement of the hand model in the VR scene being controlled by an interaction controller; determining a virtual role from the VR scene according to the role obtaining instruction; dynamically adjusting virtual props in the VR scene according to a music file played in the VR scene; displaying the virtual role in the VR scene through a VR display, and displaying the virtual props that are dynamically adjusted.

Publication date: 14-02-2019

SYSTEM AND METHOD FOR DETECTING OPERATING EVENTS OF AN ENGINE VIA MIDI

Number: US20190049329A1
Assignee:

A method of monitoring an operating event of a combustion engine includes receiving a noise signal sensed by a knock sensor disposed in or proximate to the combustion engine, correlating the noise signal with a musical instrument digital interface (MIDI) fingerprint having at least an ADSR envelope indicative of the operating event, and detecting if the operating event has occurred based on the correlating of the noise signal with the fingerprint.
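A sketch of correlating a sensed noise signal against an ADSR-shaped fingerprint: build a piecewise-linear ADSR template, then slide it over the signal's amplitude envelope and report positions where the normalized correlation exceeds a threshold. The envelope parameters and the threshold are assumptions, not values from the patent.

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, sustain, release, fs):
    """Piecewise-linear ADSR envelope (durations in seconds, levels in 0..1)."""
    a = np.linspace(0.0, 1.0, int(attack * fs), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * fs), endpoint=False)
    s = np.full(int(sustain * fs), sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * fs))
    return np.concatenate([a, d, s, r])

def detect_event(noise, fingerprint, threshold=0.7):
    """Slide the ADSR fingerprint over the amplitude envelope of the noise signal
    and report positions whose normalized correlation exceeds the threshold."""
    env = np.abs(noise)
    f = (fingerprint - fingerprint.mean()) / (fingerprint.std() + 1e-12)
    hits = []
    for i in range(len(env) - len(f) + 1):
        w = env[i:i + len(f)]
        w = (w - w.mean()) / (w.std() + 1e-12)
        if np.dot(w, f) / len(f) > threshold:
            hits.append(i)
    return hits

fs = 1000
fp = adsr_envelope(0.01, 0.02, 0.6, 0.05, 0.03, fs)
signal = np.random.randn(fs) * 0.05
signal[300:300 + len(fp)] += fp                 # synthetic burst standing in for a valve event
print(detect_event(signal, fp)[:3])             # indices near sample 300, where the burst was inserted
```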

Publication date: 03-03-2022

METHOD AND DEVICE FOR DISPLAYING MUSIC SCORE IN TARGET MUSIC VIDEO

Number: US20220068248A1
Author: LIN Jingying, Zhang Yang
Assignee:

The present application provides techniques for displaying music score segments in target music videos. The techniques comprise determining a digital music score corresponding to a piece of music comprised in a target music video; determining a segment of the digital music score corresponding to a current playing progress of the target music video based at least in part on a playing progress of the target music video; generating an image of a music score segment corresponding to the segment of the digital music score based on a predetermined condition; and presenting the image on a corresponding interface of playing the target music video.

Publication date: 08-05-2014

Audio tracker apparatus

Number: US20140129235A1
Assignee: Nokia Oyj

Apparatus comprising a receiver configured to receive a first audio signal, a signal characteriser configured to determine at least one characteristic associated with the first audio signal, a comparator configured to compare the at least one characteristic against at least one characteristic associated with at least one further audio signal, and a display configured to display the at least one characteristic associated with at least one further audio signal dependent on the first audio signal characteristic.
