Total found: 933. Displayed: 106.
10-07-2012 publication date

Methods and apparatus for remotely waking up a computer system on a computer network

Number: US0008219691B1
Author: Michael Karl, KARL MICHAEL

In a method for waking up a computer, a data unit is received at a first port of a network switching device via a first network link. The first port of the network switching device is coupled to the first network link. The network switching device is used to determine whether the data unit includes data indicative of a wake-up event for the computer, and to change a state of a second network link to wake up the computer if the data unit includes data indicative of the wake-up event for the computer. The second network link is coupled to a second port of the network switching device.
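The abstract above says the switch looks for "data indicative of a wake-up event" and then changes the state of a second link, without naming a packet format. The Python sketch below is illustrative only: it assumes a conventional Wake-on-LAN magic packet (6 bytes of 0xFF followed by the target MAC repeated 16 times) and a hypothetical Link object standing in for the second network link; neither is specified by the patent.

```python
# Illustrative sketch only: "wake-up event" is assumed here to be a standard
# Wake-on-LAN magic packet, which the patent does not mandate.

def is_wake_up_event(payload: bytes, target_mac: bytes) -> bool:
    """Return True if the payload contains a magic packet for target_mac."""
    pattern = b"\xff" * 6 + target_mac * 16
    return pattern in payload

class Link:
    """Minimal stand-in for a controllable network link."""
    def __init__(self):
        self.up = True
    def toggle_state(self):
        self.up = not self.up

def handle_frame_on_first_port(payload: bytes, target_mac: bytes, second_link: Link) -> None:
    # The switch inspects the data unit received on the first port and, when a
    # wake-up event is found, changes the state of the second link to wake the
    # attached computer.
    if is_wake_up_event(payload, target_mac):
        second_link.toggle_state()

if __name__ == "__main__":
    mac = bytes.fromhex("001122334455")
    magic = b"\xff" * 6 + mac * 16
    link = Link()
    handle_frame_on_first_port(b"junk" + magic, mac, link)
    print("second link is now", "up" if link.up else "down")
```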

30-05-2013 publication date

SUPPORT FITTING FOR HEIGHT-ADJUSTABLE SUPPORT OF A SUBSTANTIALLY HORIZONTALLY EXTENDING BEARING AND GUIDING TRACK, AND TRACK SYSTEM THEREWITH

Number: US20130134269A1
Author: Michael Karl
Assignee: PANTHER GMBH

A support fitting is used for height-adjustable support of a substantially horizontally extending bearing and guiding track, for a camera trolley (dolly), and has a fitting body with an upper face, for directly or indirectly underpinning the track; and on a lower face lying opposite the upper face, receiving and/or fastening mechanism for fixing at least one support element, so the support fitting can be supported at a certain height with respect to a base. The fitting body can have at a stop member between the upper and lower face for connecting components that substantially extend below the contact surface (e.g. girders or sleepers). A number of variants are disclosed including one having a height-adjustable support for a substantially horizontally extending bearing and guiding track for supporting a camera trolley. 1. A support fitting for height-adjustable support of a substantially horizontally extending bearing and guiding rail , in particular for a camera dolly , comprising a fitting body which ,in the region of a side defined as the upper side, has a bearing surface for directly or indirectly underpinning the rail; andwhich, on a side which is defined as the lower side and is opposite the upper side, has receiving and/or fastening means for fixing at least one support element, with which the support fitting can be supported at a certain height in relation to a base.2. A support fitting for height-adjustable support of a substantially horizontally extending bearing and guiding rail , in particular for a camera dolly , comprising a fitting body which ,on a side defined lower side, has receiving and/or fastening means for fixing at least one support element with which the support fitting can be supported at a certain height in relation to a base, wherein,for reinforcement purposes, the rail has a bearer, to which the support fitting is fastened or is fastenable outside a vertical plane of the rail.3. The support fitting as claimed in claim 1 , wherein the ...

10-01-2012 publication date

Methods and apparatus for remotely waking up a computer system on a computer network

Number: US0008095667B1
Author: Michael Karl, KARL MICHAEL

A method for waking up a computer. The method includes receiving a data unit via a first network link. The method further includes determining whether the received data unit includes data indicative of a wake-up event for the computer. The method further includes waking up the computer via a second network link if the data unit includes data indicative of a wake-up event for the computer.

03-08-2023 publication date

CONVERTER ARRANGEMENT AND METHOD OF OPERATION FOR SAID CONVERTER ARRANGEMENT

Number: US20230246442A1

A device controls a power flow in an AC network and has a series converter with a DC side to connect to a DC link and an AC side to connect to the AC network via a series transformer. The device further has a bridging arrangement between the series transformer and the series converter configured to bridge the series converter. The bridging arrangement contains at least one bridging branch having a switching unit with antiparallel thyristors and a resistance in series with the switching unit. Furthermore, a method of operation operates the device.

01-02-2011 publication date

Switch failover for aggregated data communication links

Number: US0007881185B1

A network device includes a first plurality of ports and a first plurality of communication links. Each port of the first plurality of ports communicates with a corresponding communication link of the first plurality of communication links. An adapter aggregates the first plurality communication links into a second plurality of aggregated links. The adaptor assigns a single media access control address to each aggregated link of the second plurality of aggregated links. A driver selects a first aggregated link of the second plurality of aggregated links as an active link based on a link quality of the first aggregated link. The driver sends and receives data over the first aggregated link using the single media access control address assigned to the first aggregated link. The driver selects a second aggregated link of the second plurality of aggregated links as the active link in response to the link quality of the first aggregated link being less than a link quality of the second aggregated ...
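As a rough illustration of the failover logic described above (pick the aggregated link with the best quality, switch when the active one degrades), here is a small Python model. AggregatedLink, FailoverDriver and the numeric quality score are invented names for illustration, not elements of the patent.

```python
# Minimal sketch, not the patented driver: the active aggregated link is the one
# with the highest quality score, and the driver fails over when the active
# link's quality falls below another link's quality.

from dataclasses import dataclass

@dataclass
class AggregatedLink:
    name: str
    mac: str          # single MAC address assigned to the aggregated link
    quality: float    # abstract score derived from per-port status/errors/speed

class FailoverDriver:
    def __init__(self, links):
        self.links = links
        self.active = max(links, key=lambda l: l.quality)

    def update(self):
        best = max(self.links, key=lambda l: l.quality)
        if best.quality > self.active.quality:
            self.active = best          # switch failover to the better link
        return self.active

if __name__ == "__main__":
    a = AggregatedLink("agg0", "02:00:00:00:00:01", quality=0.9)
    b = AggregatedLink("agg1", "02:00:00:00:00:02", quality=0.7)
    drv = FailoverDriver([a, b])
    a.quality = 0.3                     # active link degrades
    print(drv.update().name)            # -> agg1
```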

08-08-2002 publication date

Method of analyzing in real time the correspondence of image characteristics in corresponding video images

Number: US20020106120A1

A hybrid method using both block recursion and pixel recursion of analyzing in real time the disparity of image characteristics in a pair of two-dimensional stereoscopic images to yield an improved three-dimensional appearance thereof. A scene is recorded by a stereoscopic camera system and its video images are subjected to a disparity analysis. For every pixel, the analysis detects the displacement between individual image characteristics and allows calculation of the motion parallax as it appears to a viewer. The method is suitable for implementation with stereoscopic images of any kind and provides for basing the input image data for the block recursion on the left and on the right video image of a stereoscopic image pair and that the parameters of the stereo geometry are included in the pixel recursion for satisfying the epipolar condition. The method in accordance with the invention is used without additional aids with motion parallax in any spatial presentations.
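For orientation only, the following Python sketch shows plain block matching along a horizontal epipolar line, i.e. the basic displacement search that a block-recursive disparity analysis builds on. It does not reproduce the patented hybrid block/pixel recursion; block size, search range and the SAD cost are arbitrary choices.

```python
# Toy block-matching disparity search between a left and a right image.
import numpy as np

def block_disparity(left: np.ndarray, right: np.ndarray,
                    block: int = 8, max_disp: int = 16) -> np.ndarray:
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.float32)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):      # search along the row
                cand = right[y:y + block, x - d:x - d + block].astype(np.float32)
                cost = np.abs(ref - cand).sum()        # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    right = rng.integers(0, 255, (32, 64), dtype=np.uint8)
    left = np.roll(right, 4, axis=1)                   # simulate 4-pixel disparity
    print(block_disparity(left, right).max())          # -> 4
```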

05-05-2009 publication date

Switch failover for aggregated data communication links

Number: US0007529180B1

A network device, method and computer program product for communicating data over aggregated links, wherein each of the aggregated links comprises a plurality of data communication links. The network device comprises n ports; and a processor to determine a link quality for each of m of the aggregated links, wherein m>=2, wherein each of the m aggregated links comprises a preselected plurality p of the n ports, select one of the m aggregated links based on the link quality determined for each of the m aggregated links, and send the data over the selected one of the m aggregated links.

07-06-2012 publication date

CONTROL SIGNAL MEMOIZATION IN A MULTIPLE INSTRUCTION ISSUE MICROPROCESSOR

Number: US20120144166A1

A dynamic predictive and/or exact caching mechanism is provided in various stages of a microprocessor pipeline so that various control signals can be stored and memorized in the course of program execution. Exact control signal vector caching may be done. Whenever an issue group is formed following instruction decode, register renaming, and dependency checking, an encoded copy of the issue group information can be cached under the tag of the leading instruction. The resulting dependency cache or control vector cache can be accessed right at the beginning of the instruction issue logic stage of the microprocessor pipeline the next time the corresponding group of instructions come up for re-execution. Since the encoded issue group bit pattern may be accessed in a single cycle out of the cache, the resulting microprocessor pipeline with this embodiment can be seen as two parallel pipes, where the shorter pipe is followed if there is a dependency cache or control vector cache hit. 120-. (canceled)21. A microprocessor for multiple instruction issue in the microprocessor , the microprocessor comprising:an instruction buffer;instruction decode and issue logic;a dependency cache; anda plurality of functional units, identifies an instruction group to be issued to the plurality of functional units in the microprocessor;', 'determines whether a dependency cache entry exists for the instruction group in the dependency cache, wherein the dependency cache entry includes control signals for executing the instruction group in a pipe of the microprocessor;', 'uses the control signals in the dependency cache entry to control execution of the instruction group in the microprocessor in response to a dependency cache entry existing for the instruction group in the dependency cache; and', 'computes control signals for the instruction group to form computed control signals and stores the computed control signals in the dependency cache in association with the instruction group in response ...
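A minimal Python model of the caching idea, not of the hardware: decoded control information for an issue group is memoized under the address of its leading instruction, so re-execution of the same group can skip the slow decode/rename/dependency-check path. compute_control_vector and DependencyCache are illustrative names.

```python
# Sketch of control-signal memoization keyed by the leading instruction address.

def compute_control_vector(group):
    """Slow path: stand-in for decode, rename and dependency checking."""
    return tuple(hash(insn) & 0xFFFF for insn in group)   # encoded control signals

class DependencyCache:
    def __init__(self):
        self._cache = {}

    def issue(self, leading_pc, group):
        hit = leading_pc in self._cache
        if not hit:                                        # long pipe: full decode
            self._cache[leading_pc] = compute_control_vector(group)
        return self._cache[leading_pc], hit                # short pipe on a hit

if __name__ == "__main__":
    dc = DependencyCache()
    group = ("add r1,r2,r3", "ld r4,0(r1)")
    print(dc.issue(0x400, group)[1])   # False: first execution, slow path
    print(dc.issue(0x400, group)[1])   # True: re-execution hits the cache
```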

16-08-2012 publication date

STATE RECOVERY AND LOCKSTEP EXECUTION RESTART IN A SYSTEM WITH MULTIPROCESSOR PAIRING

Number: US20120210162A1

System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each paired microprocessor or processor cores that provide one highly reliable thread for high-reliability connect with a system components such as a memory “nest” (or memory hierarchy), an optional system controller, and optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to a selective pairing facility via a switch or a bus. Each selectively paired processor core is includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when an incorrect execution has been detected by the selective pairing facility. 1. A multiprocessing computer system comprising:a transactional memory system including a memory storage device;at least two processor cores in communication with said transactional memory system;a pairing sub-system adapted to pair at least two of said at least two processor cores for fault tolerant operations of a current transaction in response to receipt of configuration information signals, said pairing sub-system providing a common signal path for forwarding identical input data signals to each said paired two processor cores for simultaneous pairwise processing thereat, said pairwise processing performing a lock-step execution of said transaction, said transactional memory storage device adapted to store error-free transaction state information used in configuring each paired core said pairing sub-system for said simultaneous pairwise processing;decision logic device, in said pairing sub-system, for receiving transaction output ...
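The pairing-and-rollback flow can be mimicked in plain Python: two "cores" run the same transaction from a checkpoint, a comparator checks their outputs, and on a mismatch execution rolls back to the checkpoint and restarts. The fault injection, retry limit and data shapes below are invented purely for the demonstration.

```python
# Software model of paired (lockstep) execution with rollback on mismatch.
import copy

class Core:
    """Tiny stand-in for a core; one instance is given a transient fault."""
    def __init__(self, fault_on_run: int = -1):
        self.fault_on_run = fault_on_run
        self.runs = 0

    def run(self, state, inputs):
        self.runs += 1
        s = copy.deepcopy(state)
        for x in inputs:
            s["acc"] += x
        if self.runs == self.fault_on_run:     # injected transient fault
            s["acc"] ^= 1
        return s

def paired_execute(core_a, core_b, state, inputs, max_retries=5):
    checkpoint = copy.deepcopy(state)          # error-free transaction state
    for _ in range(max_retries):
        out_a = core_a.run(checkpoint, inputs)
        out_b = core_b.run(checkpoint, inputs)
        if out_a == out_b:                     # decision logic: outputs match
            return out_a
        # mismatch detected: roll back to checkpoint and restart lockstep
    raise RuntimeError("transaction could not complete reliably")

if __name__ == "__main__":
    # core_a suffers a transient fault on its first run; the retry succeeds
    print(paired_execute(Core(fault_on_run=1), Core(), {"acc": 0}, [1, 2, 3]))
```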

16-08-2012 publication date

MULTIPROCESSOR SWITCH WITH SELECTIVE PAIRING

Number: US20120210172A1

System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each paired microprocessor or processor cores that provide one highly reliable thread for high-reliability connect with a system components such as a memory “nest” (or memory hierarchy), an optional system controller, and optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to a selective pairing facility via a switch or a bus. 1. A multiprocessing computer system comprising:a memory system including a memory storage device;at least two processor cores in communication with said memory system;a pairing sub-system adapted to dynamically configure two of said at least two processor cores for independent parallel operation in response to receipt of first configuration information, said pairing sub-system providing at least two separate signal I/O paths between said memory system and each respective one of said at least two processor cores for said independent parallel operation, and,said pairing sub-system adapted to pair at least two of said at least two processor cores for fault tolerant operations in response to receipt of second configuration information, said pairing sub-system providing a common signal path for forwarding identical input data to each said paired two processor cores for simultaneous processing thereat; and,decision logic device, in said pairing sub-system, for receiving an output of each said paired two processor devices and comparing respective output results of each, said decision logic device generating error indication upon detection of non-matching output results.2. The multiprocessing system as in claim 1 , wherein said at least two separate signal I/O paths in ...

18-10-2012 publication date

IMPLEMENTING INSTRUCTION SET ARCHITECTURES WITH NON-CONTIGUOUS REGISTER FILE SPECIFIERS

Number: US20120265967A1

There are provided methods and computer program products for implementing instruction set architectures with non-contiguous register file specifiers. A method for processing instruction code includes processing an instruction of an instruction set using a non-contiguous register specifier of a non-contiguous register specification. The instruction includes the non-contiguous register specifier. 1. A non-transitory computer program product for employing extended registers , wherein instructions include at least one register field having a register extension bit , the register field being non-contiguous with the register extension bit , the non-transitory computer program product comprising a tangible storage medium readable by a processor and storing instructions for execution by the computer for performing a method comprising:decoding an instruction for execution, by the processor, the instruction comprising an opcode, one or more register fields for indexing into a corresponding register, a plurality of register extension bits, and an extended opcode, each register extension bit for effectively concatenating as a high order bit to a register field of a corresponding location in the instruction to form an extended register field; for each of said one or more register fields, effectively concatenating, by the processor, a register extension bit as high order bit to a corresponding, non-contiguous register field to form an effectively contiguous extended register field; and', 'performing a function defined by the opcode fields of the instruction using operands of said one or more extended registers corresponding to said extended register fields., 'executing the instruction comprising2. The non-transitory computer program product according to claim 1 , wherein the plurality of register extension bits are located in a single field of the instruction claim 1 , wherein a first extension bit at a first location in the single field is an register extension bit for any ...
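A small Python sketch of the decode step for a non-contiguous register specifier: a register field and a separate extension bit are effectively concatenated, the extension bit becoming the high-order bit of the extended specifier. The bit positions and field width used here are made up for illustration and are not taken from the patent.

```python
# Bit-level sketch: concatenate a non-contiguous extension bit onto a register field.

def extract_bits(word: int, lsb: int, width: int) -> int:
    return (word >> lsb) & ((1 << width) - 1)

def extended_register(word: int, field_lsb: int, ext_bit_pos: int) -> int:
    reg4 = extract_bits(word, field_lsb, 4)        # contiguous 4-bit register field
    ext = extract_bits(word, ext_bit_pos, 1)       # non-contiguous extension bit
    return (ext << 4) | reg4                       # 5-bit extended specifier

if __name__ == "__main__":
    # assume a register field at bits 21..24 and its extension bit at bit 10 (illustrative)
    insn = (1 << 10) | (0b0111 << 21)
    print(extended_register(insn, field_lsb=21, ext_bit_pos=10))   # -> 23
```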

22-11-2012 publication date

METHODS FOR GENERATING CODE FOR AN ARCHITECTURE ENCODING AN EXTENDED REGISTER SPECIFICATION

Number: US20120297171A1

There are provided methods and computer program products for generating code for an architecture encoding an extended register specification. A method for generating code for a fixed-width instruction set includes identifying a non-contiguous register specifier. The method further includes generating a fixed-width instruction word that includes the non-contiguous register specifier. 1. A method for generating code for a fixed-width instruction set , comprising:identifying a non-contiguous register specifier; andgenerating a fixed-width instruction word that includes the non-contiguous register specifier.2. The method of claim 1 , wherein the non-contiguous register specifier includes at least two sets of contiguous bits separated by at least one bit not part of the register specifier claim 1 , and the method further comprises encoding a single logical register specifier into at least two non-contiguous fields of the non-contiguous register specifier.3. The method of claim 2 , wherein the single logical register specifier is represented using a generic intrinsic that provides no indication of a partitioning of an operand specification into the non-contiguous register specifier.4. The method of claim 1 , wherein a first set of bits in the non-contiguous register specifier is specified directly by an instruction field in the fixed-width instruction claim 1 , and a second set of bits in the non-contiguous register specifier is specified directly by another instruction field in the fixed-width instruction.5. The method of claim 1 , wherein a first set of bits in the non-contiguous register specifier is specified directly by an instruction field in the fixed-width instruction claim 1 , a second set of bits in the non-contiguous register specifier is specified using a deep encoding claim 1 , and the method further comprises generating a set of n bits for inclusion as part of the non-contiguous register specifier from a set of m bits encoded in the fixed-width instruction ...

17-01-2013 publication date

HEARING AID WITH MAGNETOSTRICTIVE ELECTROACTIVE SENSOR

Number: US20130016862A1
Assignee: Starkey Laboratories, Inc.

A hearing aid includes a magnetostrictive electroactive (ME) sensor that generates an electrical signal in response to a magnetic field or a mechanical pressure. In various embodiments, the ME sensor is used for cordless charging of a rechargeable battery in the hearing aid by generating an electrical signal in response to a magnetic field generated for power transfer, magnetic sound signal reception, and/or detection of user commands by sensing a magnetic field or a pressure applied to the hearing aid. 1. A hearing aid , comprising:a hearing aid circuit including a microphone, a receiver, and an audio processor coupled between the microphone and the receiver;a rechargeable battery coupled to the hearing aid circuit to power the hearing aid circuit;a magnetostrictive electroactive (ME) sensor configured to generate an power signal in response to a first magnetic field and generate a driving signal in response to a second magnetic field or a pressure; and a battery charging circuit coupled to the rechargeable battery and configured to charge the rechargeable battery using the power signal; and', 'a switch coupled to the hearing aid circuit and configured to control the hearing aid circuit using the driving signal., 'a sensor processing circuit coupled to the ME sensor, the sensor processing circuit including2. The hearing aid of claim 1 , comprising a housing encapsulating at least the hearing aid circuit and the sensor processing circuit.3. The hearing aid of claim 2 , wherein the ME sensor is encapsulated in the housing.4. The hearing aid of claim 2 , wherein the ME sensor is incorporated into the housing.5. The hearing aid of claim 4 , wherein the housing comprises a battery door claim 4 , and the ME sensor is incorporated into the battery door.6. The hearing aid of claim 1 , wherein the ME sensor comprises two magnetostrictive layers and a piezoelectric layer sandwiched between the two magnetostrictive layers.7. The hearing aid of claim 1 , wherein the ME sensor ...

07-03-2013 publication date

Low Noise Rail and Method of Manufacturing

Number: US20130056544A1
Author: Löffler Michael Karl
Assignee: 3M INNOVATIVE PROPERTIES COMPANY

There is provided a rail having a longitudinal direction defined by the rolling direction of a wheel of a rail vehicle along the rail, and a cross-sectional profile arranged normally to the longitudinal direction and having a vertical and a horizontal direction, the rail profile having a foot portion, a web portion and a head portion whereby the head portion has a lower head section fixedly connected to the web portion and an upper head section, the lower head section and the upper head section being separated from each other by a gap extending in the vertical and the horizontal direction, the gap comprising one or more elastomeric materials and the upper head section having a geometrical moment of inertia J in the vertical direction and an area A of the profile so that the product A×Jis less than 230 cm. 1. A rail having a longitudinal direction defined by a rolling direction of a wheel of a rail vehicle along the rail , and a cross-sectional profile arranged normally to a longitudinal direction and having a vertical and a horizontal direction , the rail cross-sectional profile having a foot portion , a web portion and a head portion wherein the head portion has a lower head section fixedly connected to the web portion and an upper head section , the lower head section and the upper head section being separated from each other by a gap extending in the vertical direction and the horizontal direction , the gap comprising one or more elastomeric materials and the upper head section having a geometrical moment of inertia J in the vertical direction and an area A of the profile so that the product A×Jis less than 230 cm.2. A rail having a longitudinal direction defined by a rolling direction of a wheel of a rail vehicle along the rail , and a cross-sectional profile arranged normally to a longitudinal direction and having a vertical and a horizontal direction , the rail cross-sectional profile having a foot portion , a web portion and a head portion wherein the head ...

28-03-2013 publication date

AMPHIBIOUS SUBMERSIBLE VEHICLE

Number: US20130078876A1

A vehicle is provided that is amphibious to include submersible operations. The vehicle has wings configured to generate a sufficient dive force to oppose buoyancy of the vehicle, when desired, which are disposed on opposing sides of a central hull. The vehicle is configured to enable easy transition from land operations to water operations, to include water surface travel as well as submerged travel. 1. An amphibious submersible vehicle , comprising:a body having a central hull and a pair of wings, the central hull having a designed waterline and a watertight cabin, the pair of wings disposed on opposing sides of the central hull below the designed waterline and having an inverted wing profile configured to generate a downward dive force sufficient to overcome buoyancy forces of the vehicle once sufficient speed is achieved;a plurality of wheels coupled to a lower portion of the body positioned to contact the ground in land operations; anda water propulsion system coupled to the body and configured to propel the body in a forward direction in water operations.2. A vehicle as defined in claim 1 , wherein the plurality of wheels includes a nose wheel coupled to a nose section of the central hull claim 1 , a left wheel coupled to a left wing of the pair of wings claim 1 , and a right wheel coupled to a right wing.3. A vehicle as defined in claim 2 , wherein the nose wheel is operatively coupled to a power assembly claim 2 , and the left wheel and the right wheel are configured for free wheel rotation.4. A vehicle as defined in claim 1 , wherein the pair of wings include adjustable control surfaces.5. A vehicle as defined in claim 4 , wherein the control surfaces are configured as elevons positioned aft of a center of gravity of the vehicle.6. A vehicle as defined in claim 1 , wherein the water propulsion system includes a jet pump or propeller.7. A vehicle as defined in claim 1 , further comprising electrical motors disposed in a first compartment of the body and ...
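The "sufficient speed" condition in the abstract reduces to the inverted wings' hydrodynamic downforce exceeding the vehicle's net buoyancy. A back-of-the-envelope Python calculation is sketched below; all numbers (net buoyancy, wing area, lift coefficient) are placeholders, not figures from the patent.

```python
# Rough estimate of the speed at which inverted-wing downforce overcomes buoyancy.

RHO_WATER = 1000.0        # kg/m^3

def downforce(speed: float, wing_area: float, cl: float) -> float:
    """Hydrodynamic downforce of the inverted wings, standard lift equation."""
    return 0.5 * RHO_WATER * speed**2 * wing_area * cl

def min_dive_speed(net_buoyancy_n: float, wing_area: float, cl: float) -> float:
    """Speed above which downforce exceeds net buoyancy."""
    return (2 * net_buoyancy_n / (RHO_WATER * wing_area * cl)) ** 0.5

if __name__ == "__main__":
    # e.g. 2000 N of net buoyancy, 1.5 m^2 total wing area, C_L = 0.8 (all assumed)
    print(round(min_dive_speed(2000.0, 1.5, 0.8), 2), "m/s")
```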

01-08-2013 publication date

HEARING AID WITH INTEGRATED FLEXIBLE DISPLAY AND TOUCH SENSOR

Number: US20130195298A1
Author: Sacha Michael Karl
Assignee: Starkey Laboratories, Inc.

A user interface incorporated onto a hearing aid includes flexible hybrid component integrating a touch sensor into a bendable display. The touch sensor, such as a capacitive sensor, includes one or more sensor elements allowing a user to control operation of the hearing aid by touching. The bendable display presents information related to the operation of the hearing aid to the user. 1. A hearing aid , comprising:a hearing aid circuit including a microphone, a receiver, and a processing circuit coupled between the microphone and the receiver;a hearing aid housing containing the hearing aid circuit; and a display layer configured to dynamically display information indicative of operation of the hearing aid; and', 'a sensor layer on the display layer, the sensor layer including a capacitive sensor configured to sense touching., 'a user interface coupled to the processing circuit and including a bendable display including2. The hearing aid of claim 1 , wherein the bendable display further comprises a cover layer on the sensor layer for protection of the sensor layer and the display layer.3. The hearing aid of claim 1 , wherein the hearing aid housing comprises a top surface and a plurality of side surfaces claim 1 , and the bendable display comprises a top display encompassing a substantial portion of the top surface.4. The hearing aid of claim 3 , wherein the hearing aid housing is an in-the ear (ITE) housing configured for an ITE hearing aid claim 3 , and the top surface is a surface facing outward when the ITE hearing aid is positioned during use.5. The hearing aid of claim 3 , wherein the hearing aid housing is a behind-the ear (BTE) housing configured for an BTE hearing aid claim 3 , and the top surface is a surface facing upward when the BTE hearing aid is positioned during use.6. The hearing aid of claim 5 , wherein the bendable display further comprises one or more side displays each incorporated into a side surface of the plurality of side surfaces.7. The ...

22-08-2013 publication date

SWITCHING STRUCTURES FOR HEARING AID

Number: US20130216075A1
Assignee: Starkey Laboratories, Inc.

An apparatus is provided that includes an input system, an output system, and a sensor for sensing magnetic fields. In one example, a signal processing circuit electrically connects the input system to the output system, and a magnetic sensor adapted to inhibit the acoustic input and function as a magnetic input in the presence of a magnetic field. In one example, the magnetic sensor includes a giant magneto resistive (GMR) sensor. In another example, the magnetic sensor includes an anisotropic magneto resistive (AMR) sensor. The magnetic field can be generated by, among other things, a magnet in a telephone handset. The hearing aid further is programmed based on time-varying characteristics of the magnetic field. Wireless activation or deactivation of the hearing aid is also described. Other examples and options are provided herein. 1. An apparatus, comprising: an input system including a GMR sensor adapted to detect information modulated on a time varying magnetic field; an acoustic output system; and a signal processing circuit adapted to process signals from the input system and to present processed signals to the output system. The present application is a continuation application of U.S. application Ser. No. 11/037,549, filed Jan. 16, 2005 which is a continuation-in-part (CIP) of U.S. application Ser. No. 10/244,295, filed Sep. 16, 2002, both of which are hereby incorporated by reference in their entirety. The present application is generally related to U.S. application Ser. No. 09/659,214, filed Sep. 11, 2000, and titled AUTOMATIC SWITCH FOR HEARING AID, which is hereby incorporated by reference. The present application is generally related to U.S. application Ser. No. 10/243,412, filed Sep. 12, 2002, and titled DUAL EAR TELECOIL SYSTEM, which is hereby incorporated by reference. This invention relates generally to hearing aids, and more particularly to switching structures and systems for a hearing aid. Hearing aids can provide adjustable operational modes or ...

28-11-2013 publication date

SWITCHING STRUCTURES FOR HEARING AID

Number: US20130315423A1
Assignee: Starkey Laboratories, Inc.

A hearing aid is provided with a switch that automatically, non-manually switches at least one of inputs, filters, or programmable parameters in the presence of a magnetic field. 1. A hearing aid , comprising:an input system;an output system;a solid state tunneling magnetic sensor generating a magnetic field signal;a processor configured to be programmed to process signals from the input system and provide the processed signals to the output system,wherein the processor is configured to receive the magnetic field signal from the sensor, and is programmable to select parameters for signal processing using a first digital filter or a second digital filter, the selection of either the first digital filter or the second digital filter based at least in part on the magnetic field signal.2. The hearing aid of claim 1 , wherein the solid state tunneling magnetic sensor includes a spin dependent tunneling (SDT) device.3. The hearing aid of claim 2 , wherein the SDT device is fabricated using photolithography.4. The hearing aid of claim 2 , wherein the SDT device includes a saturation field range from 0.1 to 10 kA/m.5. The hearing aid of claim 2 , wherein the SDT device is configured to be used as a hearing aid switch.6. The hearing aid of claim 2 , wherein the SDT device is configured to provide hearing aid programming signals.7. The hearing aid of claim 2 , wherein the SDT device includes a giant magnetoresistivity (GMR) material layer claim 2 , and wherein the SDT device includes a conduction path perpendicular to a plane of the GMR material layer.8. The hearing aid of claim 1 , wherein the input system includes a microphone.9. The hearing aid of claim 1 , wherein the input system is configured to switch from an acoustic input to a magnetic input based on the magnetic field signal.10. The hearing aid of claim 9 , wherein the magnetic input includes a telecoil.11. A hearing aid claim 9 , comprising:a power source;a hearing aid circuit; anda solid state tunneling magnetic ...

12-12-2013 publication date

HEARING AID WITH DISTRIBUTED PROCESSING IN EAR PIECE

Number: US20130329926A1
Author: Sacha Michael Karl
Assignee: Starkey Laboratories, Inc.

Disclosed herein, among other things, are methods and apparatus for hearing assistance devices, and in particular to behind the ear and receiver in canal hearing aids with distributed processing. One aspect of the present subject matter relates to a hearing assistance device including hearing assistance electronics in a housing configured to be worn above or behind an ear of a wearer. The hearing assistance device includes an ear piece configured to be worn in the ear of the wearer and a processing component at the ear piece configured to perform functions in the ear piece and to communicate with the hearing assistance electronics, in various embodiments. 1. A hearing assistance device , comprising:hearing assistance electronics in a housing configured to be worn above or behind an ear of a wearer;an ear piece configured to be worn in the ear of the wearer; anda processing component at the ear piece configured to perform functions in the ear piece and to communicate with the hearing assistance electronics using a wired connection.2. The device of claim 1 , wherein the processing component includes a microcontroller.3. The device of claim 1 , wherein the processing component includes a microprocessor.4. The device of claim 1 , wherein the processing component includes a digital signal processor (DSP).5. The device of claim 1 , wherein the processing component includes a custom chip design.6. The device of claim 1 , wherein the processing component includes combinational logic.7. The device of claim 1 , wherein the processing component is configured to communicate with the hearing assistance electronics using a single wire.8. The device of claim 1 , wherein the ear piece includes a receiver configured to convert an electrical signal from the hearing assistance electronics to an acoustic signal.9. The device of claim 1 , wherein the ear piece includes a giant magnetoresistive (GMR) sensor.10. The device of claim 9 , wherein the processing component is configured to ...

16-01-2014 publication date

Method and apparatus for a hearing assistance device with pinna control

Number: US20140016810A1
Assignee: Starkey Laboratories Inc

One embodiment of the present subject matter provides an apparatus for disposition between a pinna and a head of a user, the apparatus including a behind-the-ear housing, the housing having a first lateral side located adjacent the user's ear and a second lateral side located adjacent the side of the user's head when the apparatus is worn as directed, hearing assistance electronics disposed in the behind-the-ear housing, and a control disposed on at least one lateral side of the behind-the-ear housing, the control coupled to the hearing assistance electronics.

23-01-2014 publication date

HEARING ASSISTANCE DEVICE WITH WIRELESS COMMUNICATION FOR ON- AND OFF- BODY ACCESSORIES

Number: US20140023216A1
Assignee: Starkey Laboratories, Inc.

The present disclosure related to hearing assistance devices such as hearing aids, and in particular to a hearing assistance device with wireless communication for on- and off-body accessories. In various embodiments, a hearing aid includes a housing, hearing assistance electronics within the housing, and conductive material incorporated into a portion of the housing. According to various embodiments, the conductive material includes a transmission line portion configured to conduct a signal from the hearing assistance electronics to an antenna portion of the conductive material on an exterior surface of the housing. In various applications, the conductive material includes a first antenna configured for short range communication with other body worn devices or accessories. The hearing aid also includes a second antenna within the housing, the second antenna configured for conducting RF radiation for long range communication, in various embodiments. 1. A hearing aid for a wearer having a hearing impairment , the hearing aid adapted to perform wireless communications , comprising:a housing;hearing assistance electronics within the housing; andconductive material incorporated into a portion of the housing, the conductive material including a transmission line portion configured to conduct a signal from the hearing assistance electronics to an antenna portion of the conductive material on an exterior surface of the housing.2. The hearing aid of claim 1 , wherein the conductive material is formed by a stereolithographic process.3. The hearing aid of claim 1 , wherein the conductive material includes a conductive elastomer.4. The hearing aid of claim 1 , wherein the conductive material includes a liquid metal.5. The hearing aid of claim 1 , wherein the conductive material is deposited on a surface of the housing using a spray nozzle.6. The hearing aid of claim 5 , wherein the spray nozzle is configured to be computer controlled to adjust impedance claim 5 , conductive ...

06-03-2014 publication date

IMMEDIATE RELEASE PHARMACEUTICAL FORMULATION OF 4-[3-(4-CYCLOPROPANECARBONYL-PIPERAZINE-1-CARBONYL)-4-FLUORO-BENZYL]-2H-PHTHALAZIN-1-ONE

Number: US20140066447A1
Assignee: Astrazeneca UK, Ltd.

The present invention relates to a pharmaceutical formulation comprising the drug 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one in a solid dispersion with a matrix polymer that exhibits low hygroscopicity and high softening temperature, such as copovidone. The invention also relates to a daily pharmaceutical dose of the drug provided by such a formulation. In addition, the invention relates to the use of a matrix polymer that exhibits low hygroscopicity and high softening temperature in solid dispersion with 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one for increasing the bioavailability of the drug. 1. A pharmaceutical formulation comprising and active agent in solid dispersion with a matrix polymer , wherein the active agent is 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one or a salt or solvate thereof , and the matrix polymer exhibits low hygroscopicity and high softening temperature.2. The formulation as claimed in claim 1 , wherein the active agent is in stable amorphous form.3. The formulation as claimed in claim 2 , wherein at least 90% of the active agent is in amorphous form.4. The formulation as claimed in any one of to claim 2 , wherein the matrix polymer is selected from: copovidone claim 2 , hydroxypropyl methylcellulose phthalate (HPMCP) claim 2 , hydroxypropyl methylcellulose acetate succinate (HPMCAS) claim 2 , 2-hydroxypropyl-β-cyclodextrin (HPBCD) claim 2 , hydroxypropyl methylcellulose (Hypromellose claim 2 , HPMC) claim 2 , polymethacrylates claim 2 , hydroxypropyl cellulose (HPC) claim 2 , and cellulose acetate phthalate (CAP).5. The formulation as claimed in any one of to claim 2 , wherein the matrix polymer is copovidone.6. The formulation as claimed in claim 5 , wherein the copovidone is a co-polymer of 1-vinyl-2-pyrollidone and vinyl acetate in a ratio of 6:4 by mass.7. The formulation as claimed in claim 6 , ...

10-04-2014 publication date

POWER SYSTEM UTILIZING PROCESSOR CORE PERFORMANCE STATE CONTROL

Number: US20140100706A1
Assignee: DELL PRODUCTS L.P.

An information handling system includes a power supply coupled to a processor that includes a plurality of cores. A power system controller is coupled to the power supply and the processor. The power system controller may set each of the plurality of cores to a performance state that is below a highest performance state. The power system controller may then determine whether the power supplied from the power supply to the processor during operation is sufficient to operate each of the plurality of cores at the highest performance state. In response to the power being insufficient to operate each of the plurality of cores at the highest performance state, the power system controller may control the plurality of cores such that a subset operate at the highest performance state and the remainder operate at a performance state that is lower than the highest performance state. 1. A power system , comprisinga powered system including a plurality of subsystems that are each operable in a plurality of performance states; set each of the plurality of subsystems in the powered system to a performance state that is below a highest performance state;', 'determine whether the power supplied from the power supply to the powered system during operation of the powered system is sufficient to operate each of the subsystems at the highest performance state; and', 'in response to the power supplied to the powered system being insufficient to operate each of the plurality of subsystems at the highest performance state, control the plurality of subsystems such that a subset of the plurality of subsystems operate at the highest performance state and the remainder of the plurality of subsystems operate at a performance state that is lower than the highest performance state., 'a power system controller coupled to the powered system and operable to couple to a power supply, wherein the power controller is operable to2. The power system of claim 1 , wherein the power system controller is ...
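A compact Python sketch of the budgeting policy described above: start all cores below the highest performance state, and raise only as many cores to the highest state as the measured power budget allows. The per-state wattages are assumed values for illustration, not figures from the patent.

```python
# Power-budgeted assignment of core performance states.

P_STATE_WATTS = {"low": 5.0, "high": 15.0}      # assumed per-core draw per state

def assign_states(num_cores: int, available_watts: float):
    states = ["low"] * num_cores                # initial: below highest state
    if available_watts >= num_cores * P_STATE_WATTS["high"]:
        return ["high"] * num_cores             # budget covers everything
    budget = available_watts - num_cores * P_STATE_WATTS["low"]
    step = P_STATE_WATTS["high"] - P_STATE_WATTS["low"]
    upgradable = max(0, int(budget // step))    # subset that may run at highest
    for i in range(min(upgradable, num_cores)):
        states[i] = "high"
    return states

if __name__ == "__main__":
    print(assign_states(8, available_watts=80.0))
    # 8 cores need 120 W for all-high; an 80 W budget allows 4 high and 4 low
```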

05-01-2017 publication date

Multi-Section Garbage Collection

Number: US20170004072A1

The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the first memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address. 1. A computer program product for facilitating garbage collection within a computing environment , the computer program product comprising: obtaining processing control by a handler executing within a processor of the computer environment, the obtaining processing control being based on execution of a load instruction and a determination that an object pointer to be loaded indicates a location within a selected portion of memory undergoing garbage collection;', 'based on obtaining processing control by the handler, obtaining by the handler an image of the instruction and calculating a pointer address from the image, the address specifying a location of the object pointer;', 'based on obtaining the address of the object pointer, reading, by the handler, the object pointer, the object pointer indicating a location of an object pointed to by the object pointer;', 'determining by the handler whether the object pointer is to be modified;', 'modifying by the handler, based on determining the object pointer is to be modified, the object pointer to provide a modified object pointer; and', 'storing, based on modifying the object pointer, the modified object pointer in ...
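As a conceptual model of the described load diversion (not of any real garbage collector), the Python sketch below checks whether a loaded address lies in the block under collection, moves the object out of that block, and continues the load at the new address. Heap, the forwarding table and the address model are illustrative constructs.

```python
# Conceptual load barrier: divert a load whose target is inside the collected block.

class Heap:
    def __init__(self, collected_range):
        self.objects = {}                       # address -> object payload
        self.forwarding = {}                    # old address -> new address
        self.collected = collected_range        # block undergoing collection
        self.next_free = max(collected_range) + 1

    def in_collected_block(self, addr):
        return addr in self.collected

    def load(self, addr):
        if self.in_collected_block(addr):       # divert the load
            if addr not in self.forwarding:     # move object out of the block
                new_addr = self.next_free
                self.next_free += 1
                self.objects[new_addr] = self.objects.pop(addr)
                self.forwarding[addr] = new_addr
            addr = self.forwarding[addr]
        return addr, self.objects[addr]         # continue the load

if __name__ == "__main__":
    heap = Heap(collected_range=range(100, 200))
    heap.objects[150] = "payload"
    print(heap.load(150))                       # -> (200, 'payload')
```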

05-01-2017 publication date

Multi-Section Garbage Collection

Number: US20170004075A1
Assignee: International Business Machines Corp

The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the first memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.

07-01-2016 publication date

SOFTWARE INDICATIONS AND HINTS FOR COALESCING MEMORY TRANSACTIONS

Number: US20160004461A1

A transactional memory system that utilizes indications for the coalescing of outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction. A processor of the transactional memory system executes one or more coalescing instructions for controlling coalescing of a plurality of outermost transactions. Based on the execution of the one or more coalescing instructions, the processor determines whether two outermost transactions are to be coalesced. Based on determining that two outermost transactions are to be coalesced, the processor coalesces at least two outermost transactions included in the plurality of outermost transactions. 1. A method of utilizing indications for coalescing of outermost memory transactions , the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction , the method comprising:executing, by a processor, one or more coalescing instructions for controlling coalescing of a plurality of outermost transactions;based on the execution of the one or more coalescing instructions, determining, by the processor, whether two outermost transactions are to be coalesced; andbased on determining two outermost transactions are to be coalesced, coalescing, by the processor, at least two outermost transactions included in the plurality of outermost transactions.2. The method of claim 1 , wherein the one or more coalescing instructions claim 1 , when executed claim 1 , indicate which outermost transactions can be coalesced.3. The method of claim 1 , wherein the one or more coalescing instructions include one or both of a coalescing prefix that is associated with a transaction begin instruction of an outermost transaction and a coalescing argument associated with a transaction begin instruction of an outermost transaction.4. The method of claim 1 ...
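To make the commit-timing effect concrete, here is a software-only Python model: when two outermost transactions are coalesced, the first transaction's stores are held back and committed together with the second transaction's stores. It uses ordinary dictionaries rather than hardware TX instructions, so it only mirrors the commit timing, not the hardware mechanism.

```python
# Software-only model of coalescing: defer the first transaction's commit
# to the end of the second (last) transaction.

def run_transactions(memory, tx_bodies, coalesce: bool):
    pending = {}
    for i, body in enumerate(tx_bodies):
        pending.update(body(memory))            # buffered speculative stores
        last = (i == len(tx_bodies) - 1)
        if not coalesce or last:                # commit point
            memory.update(pending)
            pending = {}
    return memory

if __name__ == "__main__":
    tx1 = lambda mem: {"a": mem.get("a", 0) + 1}
    tx2 = lambda mem: {"b": 42}
    print(run_transactions({}, [tx1, tx2], coalesce=True))    # both commit at tx2 end
    print(run_transactions({}, [tx1, tx2], coalesce=False))   # separate commits
```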

07-01-2016 publication date

CODE OPTIMIZATION TO ENABLE AND DISABLE COALESCING OF MEMORY TRANSACTIONS

Number: US20160004462A1

A transactional memory system controls the coalescing of outermost memory transactions. The coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction. A processor of the transactional memory system executes a run-time instrumentation program for monitoring and modifying an associated program having a plurality of transactions. The processor initiates execution of the associated program. Based on execution of transactions, by the processor, of the associated program, the run-time instrumentation program dynamically obtains instrumentation information associated with the execution. Based on the obtained instrumentation information, the processor dynamically modifies continued execution of transactions of the associated program to optimize transactional execution (TX). 1. A method of controlling a coalescing of outermost memory transactions , the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction , the method comprising:executing, by a processor, a run-time instrumentation program for monitoring and modifying an associated program having a plurality of transactions;initiating, by the processor, execution of the associated program;based on execution of transactions, by the processor, of the associated program, the run-time instrumentation program dynamically obtaining instrumentation information associated with the execution; andbased on the obtained instrumentation information, dynamically modifying, by the processor, continued execution of transactions of the associated program to optimize transactional execution (TX).2. The method of claim 1 , the method further comprising:determining by the instrumentation program, that a first outermost transaction and a second outermost transaction of the plurality of transactions of the associated program should be coalesced; ...

07-01-2016 publication date

COMMITTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF RESOURCE

Number: US20160004537A1

A transactional memory system determines whether a hardware transaction can be salvaged. A processor of the transactional memory system begins execution of a transaction in a transactional memory environment. Based on detection that an amount of available resource for transactional execution is below a predetermined threshold level, the processor determines whether the transaction can be salvaged. Based on determining that the transaction can not be salvaged, the processor aborts the transaction. Based on determining the transaction can be salvaged, the processor performs a salvage operation, wherein the salvage operation comprises one or more of: determining that the transaction can be brought to a stable state without exceeding the amount of available resource for transactional execution, and bringing the transaction to a stable state; and determining that a resource can be made available, and making the resource available. 1. A method for determining whether a hardware transaction can be salvaged , the method comprising:beginning execution, by a processor, of a transaction in a transactional memory environment;based on detection that an amount of available resource for transactional execution is below a predetermined threshold level, determining, by the processor, whether the transaction can be salvaged;based on determining that the transaction can not be salvaged, aborting, by the processor, the transaction; and a) determining, by the processor, that the transaction can be brought to a stable state without exceeding the amount of available resource for transactional execution, and bringing the transaction to a stable state; and', 'b) determining, by the processor, that a resource can be made available, and making the resource available., 'based on determining the transaction can be salvaged, performing, by the processor, a salvage operation, wherein the salvage operation comprises one or more of2. The method of claim 1 , further comprising claim 1 , based on ...

07-01-2016 publication date

ALERTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF SPACE

Number: US20160004558A1

A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transaction execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged. 1. A method for determining whether to pass control of a transaction , executing in a transactional memory environment , to an about-to-run-out-of-resource handler , the method comprising:determining, by a processor, information about an about-to-run-out-of-resource handler for transaction execution of a code region of a hardware transaction;dynamically monitoring, by the processor, an amount of available resource for the currently running code region of the hardware transaction;detecting, by the processor, that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level;based on detecting the amount of available resource is below the predetermined threshold level, saving, by the processor, speculative state information of the hardware transaction; andbased on detecting the amount of available resource is below the predetermined threshold level, executing, by the processor, the about-to-run-out-of-resource handler, wherein the about-to-run-out-of-resource handler determines whether the hardware transaction is to be aborted or salvaged. ...
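A rough Python model of the monitoring and hand-off described above: remaining transactional resource is tracked per operation, and when it drops below a threshold the speculative state is saved and an about-to-run-out-of-resource handler decides whether to salvage or abort. The resource unit ("store buffer slots") and the handler's policy are invented for the example.

```python
# Monitor an abstract transactional resource and hand off to a handler near exhaustion.

class TransactionAborted(Exception):
    pass

def about_to_run_out_handler(saved_state):
    if saved_state:                                 # e.g. bring to a stable state
        return ("salvaged", saved_state)
    raise TransactionAborted("no salvageable state")

def run_transaction(ops, capacity=8, threshold=2):
    speculative, remaining = [], capacity
    for op in ops:
        if remaining <= threshold:                  # about to run out of resource
            saved = list(speculative)               # save speculative state
            return about_to_run_out_handler(saved)
        speculative.append(op)                      # consume one resource unit
        remaining -= 1
    return ("committed", speculative)

if __name__ == "__main__":
    print(run_transaction([f"store{i}" for i in range(10)]))
```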

07-01-2016 publication date

SOFTWARE ENABLED AND DISABLED COALESCING OF MEMORY TRANSACTIONS

Number: US20160004559A1

A program controls coalescing of outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction. wherein optimized machine instructions are generated based on an intermediate representation of a program, wherein either two atomic tasks are merged into a single coalesced transaction or are executed as separate transactions. 1. A method of controlling a coalescing of outermost memory transactions , the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction , the method for generating optimized machine instructions based on an intermediate representation of a program , the method comprising:based on the intermediate representation, generating, by a processor, a first module of optimized non-transactional machine instructions;based on the intermediate representation identifying two atomic tasks to be performed, the two atomic tasks consisting of a first atomic task and a second atomic task, determining whether the two atomic tasks are to be coalesced into a single atomic task;based on determining that the two atomic tasks are not to be coalesced, generating for the first atomic task, a first transaction of transactional machine instructions to be executed and generating for the second atomic task, a second transaction of transactional machine instructions to be executed; andbased on determining that the two atomic tasks are to be coalesced, generating for the two atomic tasks, a single coalesced transaction of transactional machine instructions to be executed.2. The method according to claim 1 , wherein the method for generating optimized machine instructions is performed by a just-in-time (JIT) compiler.3. The method according to claim 1 , further comprising:executing a run-time instrumentation program on a previous instance of a previous ...

07-01-2016 publication date

SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS

Number: US20160004573A1

A transactional memory system salvages a hardware transaction. A processor of the transactional memory system records information about an about-to-fail handler for transactional execution of a code region, and records information about a lock elided to begin transactional execution of the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, stops transactional execution at a first instruction in the code region and executes the about-to-fail handler using the information about the about-to-fail handler. The processor, executing the about-to-fail handler, acquires the lock using the information about the lock, commits speculative state of the stopped transactional execution, and starts non-transactional execution at a second instruction following the first instruction in the code region. 1. A method for salvaging a hardware transaction , the method comprising:recording, by a processor, information about an about-to-fail handler for transactional execution of a code region;recording, by a processor, information about a lock elided to begin transactional execution of the code region;detecting, by the processor, a pending point of failure in the code region during the transactional execution;based on the detecting, stopping, by the processor, transactional execution after completing an execution of a first instruction at an instruction address in the code region;based on the detecting, executing, by the processor, the about-to-fail handler in the transactional code region using the information about the about-to-fail handler;acquiring the lock in the transaction, by the processor executing the about-to-fail handler in the transaction, using the information about the lock;committing, by the processor, speculative state of the stopped transactional execution; andstarting, by the processor, non-transactional execution at a second instruction in the code region, the second ...

More details
07-01-2016 publication date

SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS

Number: US20160004590A1
Assignee:

A transactional memory system salvages a hardware transaction. A processor of the transactional memory system executes a first salvage checkpoint instruction in a code region during transactional execution of the code region, and based on executing the first salvage checkpoint instruction, the processor records transaction state information comprising an address of the first salvage checkpoint instruction within the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, determines that the transaction state information has been recorded, and further based on the detecting, executes an about-to-fail handler. Based on executing the about-to-fail handler, the processor returns to the execution of the code region of the transaction at the address of the checkpoint instruction. 1. A method for salvaging a hardware transaction, the method comprising: executing, by a processor, a first salvage checkpoint instruction in a code region during transactional execution of the code region of a hardware transaction; based on executing the first salvage checkpoint instruction, recording, by the processor, transaction state information comprising an address of the first salvage checkpoint instruction within the code region; detecting, by the processor, a pending point of failure in the code region during the transactional execution; based on the detecting, determining, by the processor, that transaction state information has been recorded; based on the detecting, transferring control to an about-to-fail handler; and based on executing the about-to-fail handler, returning to the execution of the code region of the transaction at the address of the first salvage checkpoint instruction. 2. The method of claim 1, wherein the detecting, by the processor, the pending point of failure in the code region during the transactional execution includes evaluating a next instruction to ...

More details
07-01-2016 publication date

Salvaging lock elision transactions

Number: US20160004640A1
Assignee: International Business Machines Corp

A transactional memory system salvages a hardware lock elision (HLE) transaction. A processor of the transactional memory system executes a lock-acquire instruction in an HLE environment and records information about a lock elided to begin HLE transactional execution of a code region. The processor detects a pending point of failure in the code region during the HLE transactional execution. The processor stops HLE transactional execution at the point of failure in the code region. The processor acquires the lock using the information, and based on acquiring the lock, commits the speculative state of the stopped HLE transactional execution. The processor starts non-transactional execution at the point of failure in the code region.

More details
07-01-2016 publication date

SALVAGING LOCK ELISION TRANSACTIONS

Number: US20160004641A1
Assignee:

A transactional memory system salvages hardware lock elision (HLE) transactions. A computer system of the transactional memory system records information about locks elided to begin HLE transactional execution of first and second transactional code regions. The computer system detects a pending cache line conflict of a cache line, and based on the detecting stops execution of the first code region of the first transaction and the second code region of the second transaction. The computer system determines that the first lock and the second lock are different locks and uses the recorded information about locks elided to acquire the first lock of the first transaction and the second lock of the second transaction. The computer system commits speculative state of the first transaction and the second transaction and the computer system continues execution of the first code region and the second code region non-transactionally. 1. A method for salvaging hardware lock elision (HLE) transactions , the method comprising:recording, by a computer system, information about locks elided to begin HLE transactional execution of a first transactional code region of a first transaction having an elided first lock;recording information about locks elided to begin HLE transactional execution of a second transactional code region of a second transaction having an elided second lock, the first transaction and the second transaction executing concurrently;detecting a pending cache line conflict of a cache line monitored associated with the first transaction;based on the detecting, stopping execution of the first transactional code region of the first transaction and the second transactional code region of the second transaction;determining that the first lock and the second lock are different locks;based on the determining, using the recorded information about locks elided to acquire the first lock of the first transaction and the second lock of the second transaction;based on the ...

More details
07-01-2016 publication date

Detecting cache conflicts by utilizing logical address comparisons in a transactional memory

Number: US20160004643A1
Assignee: International Business Machines Corp

A processor in a multi-processor configuration is configured to perform dynamic address translation from logical addresses to real addresses and to detect memory conflicts for shared logical memory in transactional memory based on logical (virtual) address comparisons.
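
The conflict test itself can be pictured as a comparison of virtual addresses at cache-line granularity, roughly as in this sketch; the 128-byte line size and the neglect of virtual-address aliasing onto a shared real page are assumptions of the sketch, not details from the application.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SHIFT 7   /* assumed 128-byte cache lines */

    /* Two accesses conflict when they fall in the same line and at least one
       of them is a store; the comparison is done on logical (virtual)
       addresses, as the abstract describes. */
    static int conflicts(uint64_t va1, int is_store1,
                         uint64_t va2, int is_store2)
    {
        return (va1 >> LINE_SHIFT) == (va2 >> LINE_SHIFT)
               && (is_store1 || is_store2);
    }

    int main(void)
    {
        printf("%d\n", conflicts(0x10080, 1, 0x100a0, 0)); /* same line, store: 1 */
        printf("%d\n", conflicts(0x10080, 0, 0x20080, 0)); /* different lines:  0 */
        return 0;
    }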

More details
14-01-2016 publication date

IMMEDIATE RELEASE PHARMACEUTICAL FORMULATION OF 4-[3-(4-CYCLOPROPANECARBONYL-PIPERAZINE-1-CARBONYL)-4-FLUORO-BENZYL]-2H-PHTHALAZIN-1-ONE

Number: US20160008473A1
Assignee: AstraZeneca UK Limited

The present invention relates to a pharmaceutical formulation comprising the drug 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one in a solid dispersion with a matrix polymer that exhibits low hygroscopicity and high softening temperature, such as copovidone. The invention also relates to a daily pharmaceutical dose of the drug provided by such a formulation. In addition, the invention relates to the use of a matrix polymer that exhibits low hygroscopicity and high softening temperature in solid dispersion with 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one for increasing the bioavailability of the drug. 1. A pharmaceutical formulation comprising and active agent in solid dispersion with a matrix polymer , wherein the active agent is 4-[3-(4-cyclopropanecarbonyl-piperazine-1-carbonyl)-4-fluoro-benzyl]-2H-phthalazin-1-one or a salt or solvate thereof , and the matrix polymer exhibits low hygroscopicity and high softening temperature.2. The formulation as claimed in claim 1 , wherein the active agent is in stable amorphous form.3. The formulation as claimed in claim 2 , wherein at least 90% of the active agent is in amorphous form.4. The formulation as claimed in any one of to claim 2 , wherein the matrix polymer is selected from: copovidone claim 2 , hydroxypropyl methylcellulose phthalate (HPMCP) claim 2 , hydroxypropyl methylcellulose acetate succinate (HPMCAS) claim 2 , 2-hydroxypropyl-β-cyclodextrin (HPBCD) claim 2 , hydroxypropyl methylcellulose (Hypromellose claim 2 , HPMC) claim 2 , polymethacrylates claim 2 , hydroxypropyl cellulose (HPC) claim 2 , and cellulose acetate phthalate (CAP).5. The formulation as claimed in any one of to claim 2 , wherein the matrix polymer is copovidone.6. The formulation as claimed in claim 5 , wherein the copovidone is a co-polymer of 1-vinyl-2-pyrollidone and vinyl acetate in a ratio of 6:4 by mass.7. The formulation as claimed in claim 6 , ...

More details
12-01-2017 publication date

TRANSACTIONAL MEMORY OPERATIONS WITH READ-ONLY ATOMICITY

Number: US20170010929A1
Assignee:

Execution of a transaction mode setting instruction causes a computer processor to be in an atomic read-only mode ignoring conflicts to certain write-sets of a transaction during transactional execution. Read-set conflicts may still cause a transactional abort. Absent any aborting, the transaction's execution may complete, by committing transactional stores to memory and updating architecture states. 1. A computer implemented method for performing transactional memory operations in a multi-processor transactional execution (TX) environment , the method comprising:executing an instruction to cause a transaction to be executed, by a processor, in an atomic read-only transaction mode, execution in the atomic read-only transaction mode comprising:tracking memory read accesses as a read-set of the transaction;based on detecting a read-set conflict, aborting the transaction;suppressing any transaction abort due to conflicts of a write-set generated while in the atomic read-only transaction mode; andabsent any aborting, completing the transaction, the completing comprising committing stores executed in the transaction to memory and updating architecture states.2. The method of claim 1 , wherein the instruction is an enter TX read-only mode instruction that signals any one of:a beginning of the transaction, wherein executing the instruction causes the transaction to be started and executed, by the processor, in the atomic read-only transaction mode;a beginning of the atomic read-only transaction mode, wherein a preceding instruction causes the transaction to be started, by the processor, in a mode other than the atomic read-only transaction mode; anda resuming of the atomic read-only transaction mode, wherein executing a preceding instruction, by the processor, suspends the atomic read-only transaction mode.3. The method of claim 1 , wherein the atomic read-only transaction mode is reset based upon any one or more of:a completion of execution of a number of instructions ...
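
The effect of the mode can be modelled in a few lines: read-set conflicts still abort, while write-set conflicts are ignored as long as the mode is active. The sketch below is such a model, not the instruction set itself; on_conflict is a hypothetical callback from the coherence logic.

    #include <stdio.h>

    enum tx_mode { TX_NORMAL, TX_READ_ONLY_ATOMIC };

    static enum tx_mode mode = TX_NORMAL;
    static int aborted;

    /* Hypothetical conflict notification: against_read_set is nonzero when the
       conflicting remote access hits a line in the transaction's read-set. */
    static void on_conflict(int against_read_set)
    {
        if (against_read_set) {
            aborted = 1;                       /* read-set conflicts still abort      */
        } else if (mode != TX_READ_ONLY_ATOMIC) {
            aborted = 1;                       /* normally write-set conflicts abort  */
        }
        /* in TX_READ_ONLY_ATOMIC mode, write-set conflicts are suppressed */
    }

    int main(void)
    {
        mode = TX_READ_ONLY_ATOMIC;            /* effect of the mode-setting instruction */
        on_conflict(0);                        /* conflict with a write-set line */
        printf("aborted after write-set conflict: %d\n", aborted);   /* 0 */
        on_conflict(1);                        /* conflict with a read-set line  */
        printf("aborted after read-set conflict: %d\n", aborted);    /* 1 */
        return 0;
    }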

More details
14-01-2016 publication date

INPUT/OUTPUT ACCELERATION IN VIRTUALIZED INFORMATION HANDLING SYSTEMS

Number: US20160012003A1
Assignee:

Methods and systems for I/O acceleration on a virtualized information handling system include loading a storage virtual appliance as a virtual machine on a hypervisor. The hypervisor may execute using a first processor and a second processor. The storage virtual appliance is accessed by the hypervisor using a PCI-E device driver that is mapped to a first PCI-E NTB logical endpoint at the first processor. A second PCI-E device driver may be loaded on the storage virtual appliance that accesses the hypervisor and is mapped to a second PCI-E NTB logical endpoint at the second processor. A data transfer operation may be executed between a first memory space that is mapped to the first PCI-E NTB logical endpoint and a second memory space that is mapped to the second PCI-E NTB logical endpoint. The data transfer operation may be a read or a write operation. 1. A method executed using at least two processors , including a first processor and a second processor , the method comprising:loading a storage virtual appliance as a virtual machine on a hypervisor executing using the first processor and the second processor, wherein the storage virtual appliance is accessed by the hypervisor using a first Peripheral Component Interconnect Express (PCI-E) device driver that is mapped to a PCI-E non-transparent bridge (NTB) at a first PCI-E NTB logical endpoint at the first processor;loading a second PCI-E device driver on the storage virtual appliance that accesses the hypervisor and is mapped to the PCI-E NTB at a second PCI-E NTB logical endpoint at the second processor; andexecuting a data transfer operation between a first memory space that is mapped to the first PCI-E NTB logical endpoint and a second memory space that is mapped to the second PCI-E NTB logical endpoint,wherein the hypervisor executes in the first memory space,wherein the storage virtual appliance executes in the second memory space, andwherein the PCI NTB provides address translation between the first memory ...

More details
11-01-2018 publication date

CONTROL STATE PRESERVATION DURING TRANSACTIONAL EXECUTION

Number: US20180011765A1
Assignee:

A method includes saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state. The method also includes permitting updates to the control state for the processor while executing the transactional processing sequence. Examples of updates to the control state include key mask changes, primary region table origin changes, primary segment table origin changes, CPU tracing mode changes, and interrupt mode changes. The method also includes restoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence. In some embodiments, saving the control state comprises saving the current control state to memory corresponding to internal registers for an unused thread or another level of virtualization. A corresponding computer system and computer program product are also disclosed herein. 1. A method comprising:saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state;permitting updates to the control state for the processor while executing the transactional processing sequence; andrestoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence.2. The method of claim 1 , wherein saving the control state comprises saving the current control state to a backup set of internal control registers or registers corresponding to an unused thread or another level of virtualization.3. The method of claim 1 , wherein saving the control state comprises saving the current control state to a private location in memory.4. The method of claim 3 , wherein the private location is owned by an operating system thread or the central processing unit (CPU).5. The method of claim 1 , wherein ...
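
A simplified model of the save/restore behaviour, with a made-up control_state structure standing in for the key mask, table origins and tracing/interrupt modes named above; a real implementation would snapshot into spare internal registers or a private memory location, as the abstract describes.

    #include <stdio.h>

    /* Simplified model of the control state named in the abstract. */
    struct control_state {
        unsigned      key_mask;
        unsigned long primary_region_table_origin;
        int           cpu_tracing_mode;
        int           interrupt_mode;
    };

    static struct control_state current, saved;

    static void tx_begin(void) { saved = current; }   /* save control state */
    static void tx_abort(void) { current = saved; }   /* roll back on error */

    int main(void)
    {
        current.key_mask = 0x00f0;
        tx_begin();                               /* snapshot at transaction start */
        current.key_mask = 0x0ff0;                /* updates permitted in the txn  */
        current.cpu_tracing_mode = 1;
        tx_abort();                               /* transactional error path      */
        printf("key mask restored to %#x\n", current.key_mask);   /* 0xf0 */
        return 0;
    }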

More details
11-01-2018 publication date

CONTROL STATE PRESERVATION DURING TRANSACTIONAL EXECUTION

Number: US20180011768A1
Assignee:

A method includes saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state. The method also includes permitting updates to the control state for the processor while executing the transactional processing sequence. Examples of updates to the control state include key mask changes, primary region table origin changes, primary segment table origin changes, CPU tracing mode changes, and interrupt mode changes. The method also includes restoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence. In some embodiments, saving the control state comprises saving the current control state to memory corresponding to internal registers for an unused thread or another level of virtualization. A corresponding computer system and computer program product are also disclosed herein. 18-. (canceled)9. A computer system comprising:one or more computer processors;one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising instructions to perform:saving a control state for a processor in response to commencing a transactional processing sequence, wherein saving the control state produces a saved control state;permitting updates to the control state for the processor while executing the transactional processing sequence; andrestoring the control state for the processor to the saved control state in response to encountering a transactional error during the transactional processing sequence.10. The computer system of claim 9 , wherein saving the control state comprises saving the current control state to a backup set of internal control registers or registers corresponding to an unused thread or another level of virtualization.11. The computer system of claim 9 , wherein ...

More details
10-01-2019 publication date

USING TRANSACTIONAL EXECUTION FOR RELIABILITY AND RECOVERY OF TRANSIENT FAILURES

Number: US20190012241A1
Assignee:

Autonomous recovery from a transient hardware failure by executing portions of a stream of program instructions as a transaction. A start of a transaction is created in a stream of executing program instructions. A snapshot of a system state information is saved when the transaction begins. When a predefined number of program instructions in the stream are executed, the transaction ends, and store data of the transaction is committed. A new transaction then begins. If a transient hardware failure occurs, the transaction is aborted without notifying the computer software application that initiated the stream of program instructions. The transaction is re-executed, based on the saved snapshot of the system state information. 1. A method for autonomous recovery from a transient hardware failure by executing each portion of a stream of program instructions as a transaction , the method comprising:while executing a stream of program instructions on a computer system configured to support transactional execution mode processing:creating a start of a transactional region portion of the stream of program instructions wherein the portion of the stream of program instructions in the transactional region is to be executed as a transaction in transactional execution mode; 'in response to executing a predefined number of program instructions in the transactional region portion stream of program instructions, creating, by the operating system, an end of the transactional region portion;', 'in response to starting the transactional region in transactional execution mode committing, by the operating system, store data of a transaction to memory;', 'creating a subsequent start of a transactional region portion of the stream of program instructions to be executed as a transaction in transactional execution mode;, 'in response to ending execution of the transactional region in transactional execution mode aborting, by the operating system, the transaction without notifying a computer ...
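
The recovery scheme can be modelled in ordinary C with setjmp/longjmp standing in for the transactional snapshot and abort: execute the stream in fixed-size chunks, and replay a chunk transparently if a transient fault hits it. The chunk size of 4 and the injected fault are illustrative only.

    #include <setjmp.h>
    #include <stdio.h>

    #define CHUNK 4

    static jmp_buf snapshot;            /* stands in for the saved system state */
    static int transient_fault_injected;

    static void execute_instruction(int i)
    {
        if (i == 5 && !transient_fault_injected) {   /* one-off transient fault */
            transient_fault_injected = 1;
            longjmp(snapshot, 1);                    /* abort, roll back chunk  */
        }
        printf("instr %d\n", i);
    }

    int main(void)
    {
        volatile int start;
        for (start = 0; start < 8; start += CHUNK) {
            setjmp(snapshot);                        /* begin transaction, snapshot state */
            for (int i = start; i < start + CHUNK; i++)
                execute_instruction(i);
            /* transaction end: the chunk's stores would commit here */
        }
        return 0;
    }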

More details
19-01-2017 publication date

Power system utilizing processor core performance state control

Number: US20170017287A1
Assignee: Dell Products LP

An information handling system includes a power supply coupled to a processor that includes a plurality of cores. A power system controller is coupled to the power supply and the processor. The power system controller may set each of the plurality of cores to a performance state that is below a highest performance state. The power system controller may then determine whether the power supplied from the power supply to the processor during operation is sufficient to operate each of the plurality of cores at the highest performance state. In response to the power being insufficient to operate each of the plurality of cores at the highest performance state, the power system controller may control the plurality of cores such that a subset operate at the highest performance state and the remainder operate at a performance state that is lower than the highest performance state.

More details
17-04-2014 publication date

Method for Reducing Execution Jitter in Multi-Core Processors Within an Information Handling System

Number: US20140108778A1
Assignee: Dell Products LP

A method of reducing execution jitter includes a processor having several cores and control logic that receives core configuration parameters. Control logic determines if a first set of cores are selected to be disabled. If none of the cores is selected to be disabled, the control logic determines if a second set of cores is selected to be jitter controlled. If the second set of cores is selected to be jitter controlled, the second set of cores is set to a first operating state. If the first set of cores is selected to be disabled, the control logic determines a second operating state for a third set of enabled cores. The control logic determines if the third set of enabled cores is jitter controlled, and if the third set of enabled cores is jitter controlled, the control logic sets the third set of enabled cores to the second operating state.

More details
26-01-2017 publication date

TRANSACTIONAL MEMORY OPERATIONS WITH WRITE-ONLY ATOMICITY

Number: US20170024318A1
Assignee:

Execution of a transaction mode setting instruction causes a computer processor to be in an atomic write-only mode ignoring conflicts to certain read-sets of a transaction during transactional execution. Write-set conflicts may still cause a transactional abort. Absent any aborting, the transaction's execution may complete, by committing transactional stores to memory and updating architecture states. 1. A computer implemented method for performing transactional memory operations in a multi-processor transactional execution (TX) environment , the method comprising: monitoring write-set cache lines of the transaction while in atomic write-only transaction mode; and', 'based on detecting a write-set conflict, aborting the transaction;', 'suppressing any transaction abort due to conflicts of a read-set generated while in the atomic write-only transaction mode; and', 'absent any aborting, completing the transaction, the completing comprising committing transactional stores to memory and updating architecture states., 'executing an instruction to cause a transaction be executed, by a processor, in an atomic write-only transaction mode, execution in the atomic write-only transaction mode comprising2. The method of claim 1 , wherein the instruction is an enter TX write-only mode instruction that signals any one of:a beginning of the transaction, wherein executing the instruction causes the transaction to be executed, by the processor, in the atomic write-only transaction mode;a beginning of the atomic write-only transaction mode, wherein a preceding instruction causes the transaction to be executed, by the processor, in a TX mode other than the atomic write-only transaction mode; anda resuming of the atomic write-only transaction mode, wherein executing a preceding instruction, by the processor, suspends the atomic write-only transaction mode.3. The method of claim 1 , wherein the atomic write-only transaction mode is reset based upon any one or more of:a completion of ...

More details
25-01-2018 publication date

PC-RELATIVE ADDRESSING AND TRANSMISSION

Number: US20180024835A1
Author: Gschwind Michael Karl
Assignee:

Techniques for processing instructions include receiving a plurality of instructions from a program counter (PC) operable to be fused into a PC-relative plus offset instruction. The technique also includes fusing the plurality of instructions into an internal operation (IOP) that specifies PC-relative addressing with an offset. The technique also includes computing a shared PC portion that includes one or more common upper bits of a PC address of each of the plurality of instructions. If the shared PC portion is different than a previously computed shared PC portion, the technique transmits the shared PC portion to one or more downstream components in the processor pipeline. The technique further includes transmitting the IOP with a representation of lower order bits of the PC address and processing the IOP. 1. A method for processing instructions in a processor pipeline , comprising:receiving a plurality of instructions from a program counter (PC) operable to be fused into a PC-relative plus offset instruction;fusing the plurality of instructions into an internal operation (IOP) that specifies PC-relative addressing with an offset;computing a shared PC portion comprising one or more common upper bits of a PC address of each of the plurality of instructions;if the shared PC portion is different than a previously computed shared PC portion, transmitting the shared PC portion to one or more downstream components in the processor pipeline;transmitting the IOP with a representation of lower order bits of the PC address; andprocessing the IOP.2. The method of claim 1 , wherein the representation of lower order bits of the PC address further comprises a difference from a previous PC address.3. The method of claim 1 , wherein the representation of lower order bits of the PC address further comprises one or more lower order bits of the PC address.4. The method of claim 1 , wherein transmitting the IOP further comprises transmitting the IOP to an instruction sequencing unit ...
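
The transmit-only-when-changed behaviour for the shared upper PC bits can be sketched as follows; the 12-bit split between shared and low-order bits and the textual "send" output are assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define LOW_BITS 12          /* assumed split between shared and low-order PC bits */

    static uint64_t last_shared = UINT64_MAX;

    /* Transmit the shared upper PC bits only when they change, then send the
       fused IOP with just the low-order bits, as the abstract describes. */
    static void issue_fused_iop(uint64_t pc, int64_t offset)
    {
        uint64_t shared = pc >> LOW_BITS;
        if (shared != last_shared) {
            printf("send shared PC portion %#llx\n", (unsigned long long)shared);
            last_shared = shared;
        }
        printf("IOP: pc_low=%#llx offset=%lld\n",
               (unsigned long long)(pc & ((1u << LOW_BITS) - 1)), (long long)offset);
    }

    int main(void)
    {
        issue_fused_iop(0x10000440, 0x2000);   /* shared portion transmitted   */
        issue_fused_iop(0x10000448, 0x2008);   /* same upper bits: IOP only    */
        issue_fused_iop(0x10001000, 0x30);     /* upper bits changed: resend   */
        return 0;
    }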

More details
23-01-2020 publication date

REGULATING HARDWARE SPECULATIVE PROCESSING AROUND A TRANSACTION

Number: US20200026558A1
Assignee:

A transaction is detected. The transaction has a begin-transaction indication and an end-transaction indication. If it is determined that the begin-transaction indication is not a no-speculation indication, then the transaction is processed. 1. A method comprising:determining, by one or more computer processors, that instructions preceding a transaction have not completed;prohibiting, by one or more computer processors, the transaction from being processed until a determination is made that indicates that all pending outside instructions are not, or are no longer, being processed in a speculative manner; anddetermining, by one or more computer processors, that an end-transaction indication associated with the transaction indicates an end to a period of no-speculation transaction processing.2. The method of claim 1 , wherein the transaction comprises two or more instructions to be processed atomically on a data structure in a memory.3. The method of claim 1 , the method comprising:determining, by one or more computer processors, that a begin-transaction indication associated with a transaction is a no-speculation indication, wherein the begin-transaction indication is selected from the group consisting of: a new instruction, a new prefix instruction, or a variant of an instruction in a current instruction set architecture.4. The method of claim 1 , the method comprising:determining, by one or more computer processors, that the instructions preceding the transaction have completed; andresponsive to determining that the instructions preceding the transaction have completed, processing, by one or more computer processors, the transaction.5. The method of claim 4 , the method comprising:responsive to processing the transaction, determining, by one or more computer processors, whether an end-transaction indication associated with the transaction is a no-speculation indication; andresponsive to determining that the end-transaction indication is the no-speculation ...

More details
23-01-2020 publication date

PREFETCH PROTOCOL FOR TRANSACTIONAL MEMORY

Number: US20200026651A1
Assignee:

Providing control over processing of a prefetch request in response to conditions in a receiver of the prefetch request and to conditions in a source of the prefetch request. A processor generates a prefetch request and a tag that dictates processing the prefetch request. A processor sends the prefetch request and the tag to a second processor. A processor generates a conflict indication based on whether a concurrent processing of the prefetch request and an atomic transaction by the second processor would generate a conflict with a memory access that is associated with the atomic transaction. Based on an analysis of the conflict indication and the tag, a processor processes (i) either the prefetch request or the atomic transaction, or (ii) both the prefetch request and the atomic transaction. 1. A method to control processing of a prefetch request, the method comprising: generating, by a first processor in a multiprocessor system, a prefetch request and a tag that dictates processing the prefetch request; sending, by one or more processors in the multiprocessor system, the prefetch request and the tag to a second processor in the multiprocessor system; generating, by one or more processors in the multiprocessor system, a conflict indication based on whether a concurrent processing of the prefetch request and an atomic transaction by the second processor would generate a conflict with a memory access that is associated with the atomic transaction; and based on an analysis of the conflict indication and the tag, processing, by the second processor in the multiprocessor system, (i) either the prefetch request or the atomic transaction, or (ii) both the prefetch request and the atomic transaction. 2. The method of claim 1, further comprising: generating, by the first processor in the multiprocessor system, the tag of the prefetch request according to a prefetch protocol, wherein the prefetch request includes (a) a description of at least one prefetch request operation and (b) ...
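
One way to picture the receiver-side arbitration is a small decision table over the tag and the conflict indication, as in this sketch; the three tag values are an assumed encoding for illustration, not the protocol defined in the application.

    #include <stdio.h>

    /* Assumed tag encoding the requester can attach to a prefetch. */
    enum prefetch_tag { DROP_ON_CONFLICT, DELAY_UNTIL_COMMIT, FORCE_PROCESS };

    /* Receiver-side decision: combine the conflict indication with the tag. */
    static const char *handle_prefetch(int conflicts_with_txn, enum prefetch_tag tag)
    {
        if (!conflicts_with_txn)
            return "process prefetch and transaction concurrently";
        switch (tag) {
        case DROP_ON_CONFLICT:   return "drop prefetch, keep transaction";
        case DELAY_UNTIL_COMMIT: return "queue prefetch until transaction ends";
        default:                 return "process prefetch, abort transaction";
        }
    }

    int main(void)
    {
        printf("%s\n", handle_prefetch(0, DROP_ON_CONFLICT));
        printf("%s\n", handle_prefetch(1, DELAY_UNTIL_COMMIT));
        printf("%s\n", handle_prefetch(1, FORCE_PROCESS));
        return 0;
    }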

More details
02-02-2017 publication date

Scheme for determining data object usage in a memory region

Number: US20170031812A1
Assignee: International Business Machines Corp

Method and apparatus for managing memory is disclosed herein. In one embodiment, the method includes specifying a first load-monitored region within a memory, configuring a performance monitor to count object pointer accessed events associated with the first load-monitored region, executing a CPU instruction to load a pointer that points to a first location in the memory, responsive to determining that the first location is within the first load-monitored region, triggering an object pointer accessed event, updating a count of object pointer accessed events in the performance monitor, and performing garbage collection on the first load-monitored region based on the count of object pointer accessed events.
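
A user-level model of the counting scheme: pointer loads that fall inside the monitored region bump a counter, and crossing a threshold triggers collection of that region. The region bounds and the threshold of 3 are arbitrary; real hardware would raise the object-pointer-accessed event instead of this explicit range check.

    #include <stdint.h>
    #include <stdio.h>

    static uintptr_t region_base = 0x100000, region_size = 0x10000;
    static unsigned long access_count;
    #define GC_THRESHOLD 3

    /* Model of a pointer load: count it if it lands in the monitored region. */
    static void *load_pointer(uintptr_t p)
    {
        if (p >= region_base && p < region_base + region_size) {
            access_count++;                       /* object-pointer-accessed event */
            if (access_count >= GC_THRESHOLD)
                puts("garbage-collect the monitored region");
        }
        return (void *)p;
    }

    int main(void)
    {
        load_pointer(0x100010);
        load_pointer(0x200000);   /* outside the region, not counted        */
        load_pointer(0x100020);
        load_pointer(0x100030);   /* third in-region access triggers the GC */
        return 0;
    }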

More details
02-02-2017 publication date

Multi-section garbage collection method

Number: US20170031813A1
Assignee: International Business Machines Corp

Apparatus for garbage collection is disclosed herein. The apparatus includes a processor that includes a load-monitored region register. A memory stores program code, which, when executed on the processor performs an operation for garbage collection, the operation includes specifying a load-monitored region within a memory managed by a runtime environment; enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region; performing a garbage collection process in background without pausing executing in the runtime environment; executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region; responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.

More details
02-02-2017 publication date

SCHEME FOR DETERMINING DATA OBJECT USAGE IN A MEMORY REGION

Number: US20170031814A1
Assignee:

Method and apparatus for managing memory is disclosed herein. In one embodiment, the method includes specifying a first load-monitored region within a memory, configuring a performance monitor to count object pointer accessed events associated with the first load-monitored region, executing a CPU instruction to load a pointer that points to a first location in the memory, responsive to determining that the first location is within the first load-monitored region, triggering an object pointer accessed event, updating a count of object pointer accessed events in the performance monitor, and performing garbage collection on the first load-monitored region based on the count of object pointer accessed events. 1. A method for managing memory , comprising:specifying a first load-monitored region within a memory;configuring a performance monitor to count object pointer accessed events associated with the first load-monitored region;executing a CPU instruction to load a pointer that points to a first location in the memory;responsive to determining that the first location is within the first load-monitored region, triggering an object pointer accessed event;updating a count of object pointer accessed events in the performance monitor; andperforming garbage collection on the first load-monitored region based on the count of object pointer accessed events.2. The method of claim 1 , further comprising:loading the specified load-monitored region into a load-monitored region register that is initialized to designate the area of memory currently being evaluated.3. The method of claim 2 , comprising:loading a section of the specified load-monitored region into a load-monitored section enable register that enables a section in the load-monitored region.4. The method of claim 1 , further comprising:initializing the performance monitor to a desired value;initializing a timeout to occur after a specified period of time; anddetermining a rate of object access based on the count after ...

More details
02-02-2017 publication date

MULTI-SECTION GARBAGE COLLECTION METHOD

Number: US20170031817A1
Assignee:

A method and apparatus for garbage collection is disclosed herein. The method includes specifying a load-monitored region within a memory managed by a run-time environment, enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region, performing a garbage collection process in background without pausing executing in the run-time environment, executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region, and responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory. 1. A method for garbage collection , comprising:specifying a load-monitored region within a memory managed by a runtime environment;enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region;performing a garbage collection process in background without pausing executing in the runtime environment;executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region;responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.2. The method of claim 1 , further comprising:updating a table that tracks the movement of an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.3. The method of claim 1 , wherein enabling the load-monitored event-based branch comprises verifying that the load-monitored event-based branch was caused by a garbage collection hardware.4. The method of claim 1 , further comprising: ...

More details
31-01-2019 publication date

USING TRANSACTIONAL EXECUTION FOR RELIABILITY AND RECOVERY OF TRANSIENT FAILURES

Number: US20190034296A1
Assignee:

Autonomous recovery from a transient hardware failure by executing portions of a stream of program instructions as a transaction. A start of a transaction is created in a stream of program instructions executing on a first processor of a multi-processor computer. A snapshot of a system state information is saved when the transaction begins. When the transaction ends, store data of the transaction is committed. If a transient hardware failure occurs, the transaction is aborted without notifying the computer software application that initiated the stream of program instructions. The transaction is re-executed on a second processor of the multi-processors, based on the saved snapshot of the system state information. 1. A method for autonomous recovery from a transient hardware failure by executing each portion of a stream of program instructions as a transaction , the method comprising:creating, by an operating system on a multi-processor computer system configured to support transactional execution mode processing, a start of a transactional region portion of a stream of program instructions wherein the portion of the stream of program instructions in the transactional region is to be executed as a transaction in transactional execution mode;starting the transactional region in transactional execution mode on a first processor of the multi-processors;creating, by the operating system, an end of the transactional region portion of the stream of program instructions;in response to ending execution of the transactional region in transactional execution mode, committing, by the operating system, store data of a transaction to memory; and aborting, by the operating system, the transaction without notifying a computer software application that initiated the stream of program instructions; and', 're-executing, by the operating system on a second processor of the multi-processors, the transactional region portion stream of program instructions as a transaction in ...

More details
09-02-2017 publication date

COMPILING SOURCE CODE TO REDUCE RUN-TIME EXECUTION OF VECTOR ELEMENT REVERSE OPERATIONS

Number: US20170039047A1
Assignee:

Compiling source code to reduce run-time execution of vector element reverse operations includes: identifying, by a compiler, a first loop nested within a second loop in a computer program; identifying, by the compiler, a vector element reverse operation within the first loop; and moving, by the compiler, the vector element reverse operation from the first loop to the second loop. 1. A method of compiling source code to reduce run-time execution of vector element reverse operations, the method comprising: identifying, by a compiler, a first loop in a computer program; identifying, by the compiler, at least one vector element reverse operation within the first loop; analyzing, by the compiler, a dataflow graph containing the at least one vector element reverse operation within the first loop, including determining whether all vector operations in a portion of the dataflow graph including the first loop are lane-insensitive and determining whether all vector operations in the portion of the dataflow graph containing the first loop are lane-adjustable; and responsive to the analysis, replacing, by the compiler, the vector element reverse operations from the first loop by vector element reverse operations outside the first loop. 2. The method of wherein: identifying at least one vector element reverse operation within the first loop further comprises identifying at least one vector operation within the first loop having a live-in vector value; and replacing the vector element reverse operations from the first loop by vector element reverse operations outside the first loop further comprises inserting vector element reverse operations at an incoming perimeter of the first loop. 3. The method of wherein: identifying at least one vector element reverse operation within the first loop further comprises identifying at least one vector operation within the first loop having a live-out vector value; and replacing the vector element reverse operations from the first loop by vector element reverse operations outside the first loop further comprises inserting vector element reverse ...
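
At source level the transformation looks roughly like the pair of routines below: because the accumulation is lane-insensitive, the per-iteration reverse can be replaced by a single reverse of the live-out value at the loop's perimeter. vreverse here is a stand-in for the vector element reverse operation; the example is illustrative, not generated compiler output.

    #include <stdio.h>

    /* Stand-in for the vector element reverse operation (four lanes). */
    static void vreverse(int v[4])
    {
        int t;
        t = v[0]; v[0] = v[3]; v[3] = t;
        t = v[1]; v[1] = v[2]; v[2] = t;
    }

    /* Before: one reverse per loop iteration. */
    static void sum_naive(int acc[4], int data[][4], int n)
    {
        for (int i = 0; i < n; i++) {
            int t[4] = { data[i][0], data[i][1], data[i][2], data[i][3] };
            vreverse(t);                              /* executed n times */
            for (int k = 0; k < 4; k++) acc[k] += t[k];
        }
    }

    /* After: the accumulation is lane-insensitive, so a single reverse of the
       live-out value at the loop perimeter gives the same result. */
    static void sum_hoisted(int acc[4], int data[][4], int n)
    {
        for (int i = 0; i < n; i++)
            for (int k = 0; k < 4; k++) acc[k] += data[i][k];
        vreverse(acc);                                /* executed once */
    }

    int main(void)
    {
        int data[2][4] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 } };
        int a[4] = { 0 }, b[4] = { 0 };
        sum_naive(a, data, 2);
        sum_hoisted(b, data, 2);
        for (int k = 0; k < 4; k++)
            printf("%d %d\n", a[k], b[k]);            /* identical columns */
        return 0;
    }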

More details
09-02-2017 publication date

COMPILING SOURCE CODE TO REDUCE RUN-TIME EXECUTION OF VECTOR ELEMENT REVERSE OPERATIONS

Number: US20170039048A1
Assignee:

Compiling source code to reduce run-time execution of vector element reverse operations, includes: identifying, by a compiler, a first loop nested within a second loop in a computer program; identifying, by the compiler, a vector element reverse operation within the first loop; moving, by the compiler, the vector element reverse operation from the first loop to the second loop. 17-. (canceled)8. An apparatus for compiling source code to reduce run-time execution of vector element reverse operations , the apparatus comprising a computer processor , a computer memory operatively coupled to the computer processor , the computer memory having disposed within it computer program instructions that , when executed by the computer processor , cause the apparatus to carry out the steps of:identifying, by a compiler, a first loop in a computer program;identifying, by the compiler, at least one vector element reverse operation within the first loop;analyzing, by the compiler, a dataflow graph containing that at least one vector element reverse operation within the first loop, including determining whether all vector operations in a portion of the dataflow graph including the first loop are lane-insensitive and determining whether all vector operations in the portion of the dataflow graph containing the first loop are lane-adjustable; andresponsive to the analysis, replacing, by the compiler, the vector element reverse operations from the first loop by vector element reverse operations outside the first loop.9. The apparatus of wherein:identifying at least one vector element reverse operation within the first loop further comprises identifying t least one vector operation within the first loop having a live-in vector value; andreplacing the vector element reverse operations from the first loop by vector element reverse operations outside the first loop further comprises inserting vector element reverse operations at an incoming perimeter of the first loop.10. The apparatus of ...

More details
01-05-2014 publication date

CONFIDENCE-DRIVEN SELECTIVE PREDICATION OF PROCESSOR INSTRUCTIONS

Number: US20140122836A1
Author: Gschwind Michael Karl

An apparatus includes a network interface, memory, and a processor. The processor is coupled with the network interface and memory. The processor is configured to determine that an instruction instance is a branch instruction instance. Responsive to a determination that an instruction instance is a branch instruction instance, the processor is configured to obtain a branch prediction for the branch instruction instance and a confidence value of the branch prediction. The processor is further configured to determine that the confidence for the branch prediction is low based on the confidence value, and responsive to such a determination, generate predicated instruction instances based on the branch instruction instance. 1. A method comprising:determining that an instruction instance is a branch instruction instance;responsive to determining that the instruction instance is a branch instruction instance, obtaining a branch prediction for the branch instruction instance and a confidence value of the branch prediction;determining that confidence for the branch prediction is low based on the confidence value; andresponsive to said determining that confidence for the branch prediction is low based on the confidence value, generating predicated instruction instances based on the branch instruction instance.2. The method of claim 1 , wherein said determining that confidence for the branch prediction is low based on the confidence value comprises determining that the confidence value is less than a threshold value.3. The method of claim 2 , wherein determining that the confidence value is less than the threshold value comprises:setting the threshold value hosted in a register; andcomparing the confidence value to the threshold value.4. The method of claim 3 , wherein said setting the threshold value comprises changing the threshold value based claim 3 , at least in part claim 3 , on anticipated behavior of a program that comprises the branch instruction instance.5. The ...
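
The decision rule can be sketched as: below a settable confidence threshold, emit predicated instances of both paths instead of trusting the prediction. The 0..100 threshold scale and the returned strings below are illustrative assumptions, not the apparatus itself.

    #include <stdio.h>

    /* Assumed scale 0..100; the abstract describes the threshold as held in a
       register and adjustable for the anticipated program behaviour. */
    static int confidence_threshold = 40;

    static const char *codegen_for_branch(int predicted_taken, int confidence)
    {
        if (confidence < confidence_threshold)
            return "emit predicated instances of both paths (if-conversion)";
        return predicted_taken ? "emit fall-through for the taken path"
                               : "emit fall-through for the not-taken path";
    }

    int main(void)
    {
        printf("%s\n", codegen_for_branch(1, 90));  /* high confidence */
        printf("%s\n", codegen_for_branch(1, 20));  /* low confidence  */
        return 0;
    }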

More details
18-02-2016 publication date

Compiler optimizations for vector instructions

Number: US20160048379A1
Assignee: International Business Machines Corp

An optimizing compiler includes a vector optimization mechanism that optimizes vector instructions by eliminating one or more vector element reverse operations. The compiler can generate code that includes multiple vector element reverse operations that are inserted by the compiler to account for a mismatch between the endian bias of the instruction and the endian preference indicated by the programmer or programming environment. The compiler then analyzes the code and reduces the number of vector element reverse operations to improve the run-time performance of the code.
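
The core rewrite is that two vector element reverse operations feeding each other cancel. The toy IR below shows just that rule; it is a sketch of the idea, not the compiler's actual data structures.

    #include <stdio.h>

    /* Toy IR: a node is either a vreverse of its operand or a leaf value. */
    struct node { int is_vreverse; struct node *operand; const char *name; };

    /* vreverse(vreverse(x)) -> x, the cancellation the compiler applies when it
       had inserted paired reverses for endian correction. */
    static struct node *eliminate_double_reverse(struct node *n)
    {
        if (n && n->is_vreverse && n->operand && n->operand->is_vreverse)
            return n->operand->operand;
        return n;
    }

    int main(void)
    {
        struct node x  = { 0, NULL, "load x" };
        struct node r1 = { 1, &x,   "vreverse" };
        struct node r2 = { 1, &r1,  "vreverse" };
        printf("result feeds from: %s\n", eliminate_double_reverse(&r2)->name);
        return 0;
    }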

More details
18-02-2016 publication date

COMPILER OPTIMIZATIONS FOR VECTOR INSTRUCTIONS

Number: US20160048445A1
Assignee:

An optimizing compiler includes a vector optimization mechanism that optimizes vector instructions by eliminating one or more vector element reverse operations. The compiler can generate code that includes multiple vector element reverse operations that are inserted by the compiler to account for a mismatch between the endian bias of the instruction and the endian preference indicated by the programmer or programming environment. The compiler then analyzes the code and reduces the number of vector element reverse operations to improve the run-time performance of the code. 19-. (canceled)10. A computer-implemented method executed by at least one processor for processing a plurality of instructions in a computer program , the method comprising:providing a computer program including a plurality of instructions that includes at least one vector operation; andprocessing the plurality of instructions to eliminate at least one vector element reverse operation from the computer program to enhance run-time performance of the computer program.11. The method of further comprising:identifying a first vector element reverse operation and a second vector element reverse operation in the computer program, such that the result of the first vector element reverse operation is the source of the second vector element reverse operation; andeliminating at least one of the first and second vector element reverse operations.12. The method of further comprising:identifying a computation in the computer program where all operations performed on input vectors are single instruction multiple data (SIMD) instructions; andeliminating the at least one vector element reverse instruction that corresponds to the computation13. The method of further comprising:identifying a unary operation accompanied by at least one vector element reverse operation; andchanging order of instructions for the unary operation and the at least one vector element reverse operation.14. The method of further comprising: ...

More details
16-02-2017 publication date

PROCESSOR INSTRUCTION SEQUENCE TRANSLATION

Number: US20170046157A1
Assignee:

Computer readable medium and apparatus for translating a sequence of instructions is disclosed herein. In one embodiment, an operation includes recognizing a candidate multi-instruction sequence, determining that the multi-instruction sequence corresponds to a single instruction, and executing the multi-instruction sequence by executing the single instruction. 17.-. (canceled)8. A system , comprising:a memory storing a plurality of program instructions; and retrieve a multi-instruction sequence from the plurality of program instructions;', 'responsive to determining that operations of the multi-instruction sequence is functionally equivalent to to operation of a first instruction, replace the multi-instruction sequence with the first instruction; and', 'execute the multi-instruction sequence by executing the first instruction., 'a processor comprising a translator module configured to9. The system of claim 8 , wherein retrieving the multi-instruction sequence comprises:fetching a first block of instructions of the multi-instruction sequence from a cache; andscanning the first block of instructions to determine whether the first block of instructions is recognized.10. The system of claim 8 , wherein responsive to determining that operations of the multi-instruction sequence is functionally equivalent to operation of the first instruction claim 8 , determining whether the first block of instructions can be fully contained in a single line of a cache.11. The system of claim 10 , wherein responsive to determining that the first block of instructions can be fully contained in a single line of the cache claim 10 , replacing the first block of instructions with a second block of instructions in the cache.12. The system of claim 11 , wherein the second block of instructions is a single instruction corresponding to the multi-instruction sequence.13. The system of claim 11 , wherein executing the multi-instruction sequence by executing the single instruction comprises: ...
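
A sketch of the translator's job: scan a fetched block for a known multi-instruction idiom and substitute the single functionally equivalent instruction. The mnemonics are made up for illustration (loosely modelled on an address-materialise-plus-load pair), and the string matching stands in for real decode logic.

    #include <stdio.h>
    #include <string.h>

    /* Replace a recognized two-instruction idiom with one equivalent instruction. */
    static int translate(const char *insns[], int n, const char *out[], int max)
    {
        int o = 0;
        for (int i = 0; i < n && o < max; ) {
            if (i + 1 < n &&
                strcmp(insns[i],     "addis r4,r2,HI(x)") == 0 &&
                strcmp(insns[i + 1], "ld r4,LO(x)(r4)")   == 0) {
                out[o++] = "pld r4,x";       /* one fused load replaces the pair */
                i += 2;
            } else {
                out[o++] = insns[i++];       /* pass other instructions through */
            }
        }
        return o;
    }

    int main(void)
    {
        const char *in[] = { "addis r4,r2,HI(x)", "ld r4,LO(x)(r4)", "add r5,r5,r4" };
        const char *out[8];
        int n = translate(in, 3, out, 8);
        for (int i = 0; i < n; i++) puts(out[i]);
        return 0;
    }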

More details
16-02-2017 publication date

PROCESSOR INSTRUCTION SEQUENCE TRANSLATION

Number: US20170046163A1
Assignee:

Method for translating a sequence of instructions is disclosed herein. In one embodiment, the method includes recognizing a candidate multi-instruction sequence, determining that the multi-instruction sequence corresponds to a single instruction, and executing the multi-instruction sequence by executing the single instruction. 1. A method of executing processor instructions , comprising:retrieving a multi-instruction sequence from memory for execution by a processor;responsive to determining that operations of the multi-instruction sequence is functionally equivalent to operation of a first instruction, replacing the multi-instruction sequence with the first instruction;executing the multi-instruction sequence by executing the first instruction.2. The method of claim 1 , wherein retrieving the multi-instruction sequence comprises:fetching a first block of instructions of the multi-instruction sequence from an instruction buffer; andscanning the first block of instructions to determine whether the first block of instructions is recognized.3. The method of claim 1 , wherein responsive to determining that operations of the multi-instruction sequence is functionally equivalent to operation of the first instruction claim 1 , determining whether the first block of instructions can be fully contained in a single line of a cache.4. The method of claim 3 , wherein responsive to determining that the first block of instructions can be fully contained in a single line of the cache claim 3 , replacing the first block of instructions with a second block of instructions in the cache.5. The method of claim 4 , wherein the second block of instructions is a single instruction corresponding to the multi-instruction sequence.6. The method of claim 4 , wherein executing the multi-instruction sequence by executing the single instruction comprises:executing the second block of instructions.7. The method of claim 1 , wherein the multi-instruction sequence is replaced with the first ...

More details
22-02-2018 publication date

COMPILER OPTIMIZATIONS FOR VECTOR OPERATIONS THAT ARE REFORMATTING-RESISTANT

Number: US20180052670A1
Assignee:

An optimizing compiler includes a vector optimization mechanism that optimizes vector operations that are reformatting-resistant, such as source instructions that do not have a corresponding reformatting operation, sink instructions that do not have a corresponding reformatting operation, a source instruction that is a scalar value, a sink instruction that may produce a scalar value, and an internal operation that depends on lanes being in a specified order. The ability to optimize vector instructions that are reformatting-resistant reduces the number of operations to improve the run-time performance of the code. 1. An apparatus comprising:at least one processor;a memory coupled to the at least one processor;a computer program residing in the memory, the computer program including a plurality of instructions that includes at least one vector operation and that includes a plurality of reformatting-resistant vector operations that comprises a sink instruction without a corresponding reformatting operation; anda compiler residing in the memory and executed by the at least one processor, the compiler including a vector instruction optimization mechanism that optimizes at least one of the plurality of reformatting-resistant vector operations in the computer program to enhance run-time performance of the computer program.2. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises a source instruction without a corresponding reformatting operation.3. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises a source instruction that operates on a scalar value.4. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises a sink instruction that can produce a scalar value.5. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises an internal operation that depends on lanes being in a specified order.6. The apparatus of wherein the ...

More details
23-02-2017 publication date

COMPILER OPTIMIZATIONS FOR VECTOR OPERATIONS THAT ARE REFORMATTING-RESISTANT

Number: US20170052768A1
Assignee:

An optimizing compiler includes a vector optimization mechanism that optimizes vector operations that are reformatting-resistant, such as source instructions that do not have a corresponding reformatting operation, sink instructions that do not have a corresponding reformatting operation, a source instruction that is a scalar value, a sink instruction that may produce a scalar value, and an internal operation that depends on lanes being in a specified order. The ability to optimize vector instructions that are reformatting-resistant reduces the number of operations to improve the run-time performance of the code. 19-. (canceled)10. A computer-implemented method executed by at least one processor for processing a plurality of instructions in a computer program , the method comprising:providing a computer program including a plurality of instructions that includes at least one vector operation and that includes a plurality of reformatting-resistant vector operations; andoptimizing at least one of the plurality of reformatting-resistant vector operations in the computer program to enhance run-time performance of the computer program.11. The method of wherein the at least one reformatting-resistant vector operation comprises a source instruction without a corresponding reformatting operation.12. The method of wherein the at least one reformatting-resistant vector operation comprises a sink instruction without a corresponding reformatting operation.13. The method of wherein the at least one reformatting-resistant vector operation comprises a source instruction that operates on a scalar value.14. The method of wherein the at least one reformatting-resistant vector operation comprises a sink instruction that can produce a scalar value.15. The method of wherein the at least one reformatting-resistant vector operation comprises an internal operation that depends on lanes being in a specified order.16. The method of wherein the vector instruction optimization mechanism ...

More details
23-02-2017 publication date

Compiler optimizations for vector operations that are reformatting-resistant

Number: US20170052769A1
Assignee: International Business Machines Corp

An optimizing compiler includes a vector optimization mechanism that optimizes vector operations that are reformatting-resistant, such as source instructions that do not have a corresponding reformatting operation, sink instructions that do not have a corresponding reformatting operation, a source instruction that is a scalar value, a sink instruction that may produce a scalar value, and an internal operation that depends on lanes being in a specified order. The ability to optimize vector instructions that are reformatting-resistant reduces the number of operations to improve the run-time performance of the code.

More details
20-02-2020 publication date

System and method for sensor coordination

Number: US20200057954A1
Author: Michael Karl Binder
Assignee: Raytheon Co

A system and method for selecting one sensor from among a plurality of sensors. For each of the plurality of sensors, a conditional probability of the sensor correctly identifying the target from among a plurality of objects detected by the sensor, given an association event, is calculated, and multiplied by a reward function for the sensor. The sensor for which this product is greatest is selected.
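
The selection rule reduces to an argmax of conditional probability times reward over the sensors, as in this sketch; the sensor names and numbers are illustrative only.

    #include <stdio.h>

    /* Pick the sensor maximizing P(correct identification | association) times
       its reward, following the selection rule in the abstract. */
    struct sensor { const char *name; double p_correct_given_assoc; double reward; };

    static const struct sensor *select_sensor(const struct sensor *s, int n)
    {
        const struct sensor *best = &s[0];
        for (int i = 1; i < n; i++)
            if (s[i].p_correct_given_assoc * s[i].reward >
                best->p_correct_given_assoc * best->reward)
                best = &s[i];
        return best;
    }

    int main(void)
    {
        struct sensor s[] = { { "radar", 0.70, 1.0 },
                              { "EO",    0.90, 0.6 },
                              { "IR",    0.60, 1.2 } };
        printf("selected: %s\n", select_sensor(s, 3)->name);   /* IR: 0.72 */
        return 0;
    }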

12-03-2015 publication date

FOOTWEAR ASSEMBLIES HAVING REINFORCED INSOLE PORTIONS AND ASSOCIATED METHODS

Number: US20150068066A1
Assignee:

Footwear assemblies including reinforced insole portions and associated methods of use and manufacture are disclosed herein. In one embodiment, a footwear assembly includes an upper coupled to an insole. The insole includes a first surface opposite a second surface. The first surface is configured to face a user's foot when inserted in the upper. The upper at least partially wraps around and is stitched directly to the second surface of the insole. The footwear assembly further includes a midsole adjacent to the second surface of the insole, and an outsole adjacent to the midsole. 1. A footwear assembly having an arch portion between a heel portion and a forefoot portion , the footwear assembly comprising:an upper having a peripheral lower edge portion; a first insole board extending from the heel portion to the forefoot portion;', 'a heel counter adjacent to the first insole board and positioned at the heel portion; and', 'a second insole board adjacent to the heel counter and positioned at the heel portion, the second insole board having a lower surface opposite the heel counter, and wherein the lower edge portion of the upper at wraps around inwardly over at least a portion of the lower surface of the second insole board and directly to an upper surface of the forefoot portion of the first insole board;, 'an insole adjacent to the upper, the insole including—'}stitching securing the lower edge portion of the upper directly to the lower surface of the second insole board;a midsole coupled to the insole; andan outsole coupled to the midsole.2. The footwear assembly of wherein the lower edge portion of the upper is adhered to the insole or the midsole at the arch portion without being stitched thereto.3. The footwear assembly of wherein the stitching does not secure the lower edge portion of the upper to the first insole board at the heel portion.4. The footwear assembly of wherein the lower edge portion of the upper flares outwardly on the upper surface at the ...

08-03-2018 publication date

ENABLING END OF TRANSACTION DETECTION USING SPECULATIVE LOOK AHEAD

Number: US20180066385A1
Assignee:

A transaction within a computer program or computer application comprises program instructions performing multiple store operations that appear to run and complete as a single, atomic operation. The program instructions forming a current transaction comprise a transaction begin indicator, a plurality of instructions (e.g., store operations), and a transaction end indicator. A near-end of transaction indicator is triggered based on a speculative look ahead operation such that an interfering transaction requiring a halt operation may be delayed to allow the current transaction to end. A halt operation, also referred to as an abort operation, as used herein refers to an operation responsive to a condition where two transactions have been detected to interfere where at least one transaction must be aborted and the state of the processor is reset to the state at the beginning of the aborted transaction by performing a rollback. 1. A method for enabling end of transaction detection using speculative look ahead , the method comprising:setting an indicator to indicate a current transaction is being processed;monitoring a speculative look ahead operation to detect whether an end-transaction instruction corresponding to the current transaction is likely to occur; andresponsive to detecting the end-transaction instruction, enabling a near-end-transaction processing mode.2. The method of claim 1 , wherein the current transaction comprises a begin-transaction instruction and program instructions immediately following the begin-transaction instruction up to claim 1 , and including claim 1 , the end-transaction instruction claim 1 , wherein the end-transaction instruction corresponds to the begin-transaction instruction.3. The method of claim 1 , wherein the speculative look ahead operation comprises decoding one or more program instructions prior to executing the one or more program instructions.4. The method of claim 1 , further comprising:receiving a notification of an ...
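
The control flow described above can be pictured roughly as follows. The Python sketch is a loose software stand-in, not the claimed hardware behavior; the TBEGIN/TEND opcodes, the look-ahead window size, and the delay decision are all invented for illustration.

# Hypothetical opcodes: TBEGIN / TEND mark a transaction.
def process(window, interfering_request_at):
    in_txn, near_end = False, False
    for i, op in enumerate(window):
        if op == "TBEGIN":
            in_txn = True
        # Speculative look-ahead: peek at the next few decoded instructions.
        near_end = in_txn and "TEND" in window[i + 1:i + 4]
        if i == interfering_request_at and in_txn:
            action = "delay request" if near_end else "abort transaction"
            print(f"at {i}: {action}")
        if op == "TEND":
            in_txn = False

process(["TBEGIN", "ST", "ST", "ST", "TEND"], interfering_request_at=3)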

28-02-2019 publication date

Deferred response to a prefetch request

Number: US20190065378A1
Assignee: International Business Machines Corp

Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by determining an evaluation of the prefetch request prior to execution of the program instructions included in the prefetch request. The evaluation is based, at least in part, on (i) a comparison of a priority of the prefetch request with a priority of the transaction and (ii) a condition that exists in one or both of the local processor and the remote processor. Based on the evaluation, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request.
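
In outline, the decision amounts to comparing priorities and then serving, delaying, or quiescing the prefetch. The sketch below is an invented approximation of that outline, with hypothetical field names and a made-up delay period, not the patent's actual policy.

# All thresholds and field names are hypothetical.
def handle_prefetch(prefetch_priority, txn_priority, txn_active, delay_cycles=200):
    if not txn_active:
        return "execute prefetch now"
    if prefetch_priority > txn_priority:
        return "execute prefetch now"          # prefetch wins the conflict
    if prefetch_priority == txn_priority:
        return f"delay prefetch for {delay_cycles} cycles"
    return "quiesce prefetch until the transaction completes"

print(handle_prefetch(prefetch_priority=3, txn_priority=5, txn_active=True))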

10-03-2016 publication date

TABLE OF CONTENTS POINTER VALUE SAVE AND RESTORE PLACEHOLDER POSITIONING

Number: US20160070548A1
Assignee:

Embodiments describe a computer implemented method of compiling application source code into application object code. A compiler generates application object code having a plurality of table of contents TOC placeholder locations for a potential TOC pointer value command within a calling function. A first function call site of the calling function is corresponded to a first TOC placeholder location. A second function call site of the calling function is corresponded to a second TOC placeholder location. 110-. (canceled)11. A computer program product for compiling application source code into application object code , the computer program product comprising: generating application object code, with a compiler, having a plurality of table of contents (TOC) placeholder locations for a potential TOC pointer value command within a calling function;', 'corresponding a first function call site of the calling function to a first TOC placeholder location;', 'corresponding a second function call site of the calling function to a second TOC placeholder location;', 'identifying, by the compiler, the first function call site from the calling function to a first callee function;', 'positioning the first TOC placeholder location for the potential TOC pointer value command of the calling function at a location in a first region of application object code before the first function call site but after an instruction that invalidates the potential TOC pointer value command, wherein the positioned first TOC placeholder has a first usage of computer resources when the calling function is executed with the potential TOC pointer value command;', 'identifying the first TOC placeholder location is dominating the first function call site;', 'inserting a TOC placeholder instruction in the first TOC placeholder location; and', 'inserting a first function call instruction with a relocation indicator pointing to the TOC placeholder instruction at the first function call site., 'a computer ...

28-02-2019 publication date

Control system for learning and surfacing feature correlations

Number: US20190065983A1
Assignee: Microsoft Technology Licensing LLC

A plurality of different hosted services each include enabling logic that enables a set of actions. Usage data for a plurality of different tenants is accessed and actions are grouped into features based upon underlying enabling logic. A correlation score between features is identified based on tenant usage data for those features. A tenant under analysis is selected and usage data for the tenant under analysis is used to identify related features that the tenant under analysis is not using, based upon the correlation scores for the features. An output system is controlled to surface the related features for the tenant under analysis.
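
A small, self-contained sketch of the scoring idea follows. Tenant names, feature names, the usage matrix, and the crude co-usage score are all invented; the sketch only shows how unused features that correlate with a tenant's existing usage could be surfaced.

# Usage matrix: tenant -> {feature: 1 if used, 0 otherwise}. Invented data.
usage = {
    "tenant_a": {"share": 1, "co_edit": 1, "version_history": 0},
    "tenant_b": {"share": 1, "co_edit": 1, "version_history": 1},
    "tenant_c": {"share": 0, "co_edit": 0, "version_history": 0},
    "tenant_d": {"share": 1, "co_edit": 0, "version_history": 1},
}
features = ["share", "co_edit", "version_history"]

def correlation(f1, f2):
    # Fraction of tenants using f1 that also use f2 (a crude correlation score).
    users_f1 = [t for t in usage if usage[t][f1]]
    if not users_f1:
        return 0.0
    return sum(usage[t][f2] for t in users_f1) / len(users_f1)

def recommend(tenant, threshold=0.5):
    used = [f for f in features if usage[tenant][f]]
    unused = [f for f in features if not usage[tenant][f]]
    return [f for f in unused
            if any(correlation(u, f) >= threshold for u in used)]

print(recommend("tenant_a"))   # ['version_history']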

27-02-2020 publication date

MEMORY ACCESS REQUEST FOR A MEMORY PROTOCOL

Number: US20200065138A1
Assignee:

A computer-implemented method includes identifying two or more memory locations and referencing, by a memory access request, the two or more memory locations. The memory access request is a single action pursuant to a memory protocol. The computer-implemented method further includes sending the memory access request from one or more processors to a node and fetching, by the node, data content from each of the two or more memory locations. The computer-implemented method further includes packaging, by the node, the data content from each of the two or more memory locations into a memory package, and returning the memory package from the node to the one or more processors. A corresponding computer program product and computer system are also disclosed. 1. A computer-implemented method comprising:identifying two or more memory locations, each of the two or more memory locations corresponding to a different word in a memory;referencing, by a memory access request, the two or more memory locations;sending the memory access request from one or more processors to a node, the memory access request comprising a single action pursuant to a memory protocol, wherein the node is configured for managing requests under the memory protocol;fetching, by the node, data content from each of the two or more memory locations;packaging, by the node, the data content from each of the two or more memory locations into a memory package;returning the memory package from the node to the one or more processors; andinitiating a transaction in a transactional memory environment, wherein the two or more memory locations are used in the transaction.2. The computer-implemented method of claim 1 , wherein the node is of at least one type selected from the group consisting of:(a) one or more additional processors;(b) a memory controller; and(c) a cache controller.3. The computer-implemented method of claim 1 , wherein:the one or more processors communicate with the node via a system bus;the one or ...
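
The request/response exchange can be pictured with the toy model below, in which the "node" is just a Python object standing in for the controller that serves the protocol; addresses, contents, and the request format are invented.

# Toy model of a single request that names several memory words at once.
MEMORY = {0x1000: 0xAA, 0x2000: 0xBB, 0x3000: 0xCC}   # invented contents

class Node:
    """Stands in for the memory or cache controller serving the protocol."""
    def serve(self, request):
        # Fetch every referenced location, then package results into one reply.
        return {addr: MEMORY[addr] for addr in request["locations"]}

request = {"op": "multi_read", "locations": [0x1000, 0x3000]}  # one protocol action
package = Node().serve(request)
print(package)   # {4096: 170, 12288: 204}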

09-03-2017 publication date

DEBUGGER DISPLAY OF VECTOR REGISTER CONTENTS AFTER COMPILER OPTIMIZATIONS FOR VECTOR INSTRUCTIONS

Number: US20170068610A1
Assignee:

An optimizing compiler includes a vector optimization mechanism that optimizes vector instructions by eliminating one or more vector element reverse operations. The compiler can generate code that includes multiple vector element reverse operations that are inserted by the compiler to account for a mismatch between the endian bias of the instruction and the endian preference indicated by the programmer or programming environment. The compiler then analyzes the code and reduces the number of vector element reverse operations to improve the run-time performance of the code. The compiler generates a debugger table that specifies which instructions have corresponding reformatting operations. A debugger then uses the debugger table to display contents of the vector register, which is displayed in regular form as well as in a form that is reformatted according to information in the debugger table. 16-. (canceled)7. A computer-implemented method executed by at least one processor for debugging a plurality of instructions in a computer program , the method comprising:providing a computer program including a plurality of instructions that includes at least one vector instruction;compiling the plurality of instructions and generating a debug table that specifies a vector register, a corresponding address range for the specified vector register, and a corresponding endian reformatting type for the specified vector register; andrunning a debugger, the debugger receiving a request to display contents of a vector register at a current instruction in the computer program, and in response, the debugger determines whether the current instruction has an address within an address range in the debug table, and when the current instruction is not within an address range in the debug table, the debugger displays the contents of the vector register, and when the current instruction has an address within an address range of the debug table, the debugger determines from the debug table an ...
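
One way to picture the debugger side of this is sketched below: look up the current instruction address in the debug table and, when it falls inside a recorded range, print both the raw register contents and a view reformatted per the recorded type. The table layout and values are invented, not the format the patent defines.

# Debug table rows: (start_addr, end_addr, register, reformat_type). Invented.
DEBUG_TABLE = [(0x100, 0x140, "v2", "element_reverse")]

def show_vector(addr, reg, raw_elements):
    print(f"{reg} (raw):         {raw_elements}")
    for start, end, table_reg, kind in DEBUG_TABLE:
        if table_reg == reg and start <= addr < end and kind == "element_reverse":
            print(f"{reg} (reformatted): {list(reversed(raw_elements))}")

show_vector(addr=0x120, reg="v2", raw_elements=[1, 2, 3, 4])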

27-02-2020 publication date

HEARING AID WITH DISTRIBUTED PROCESSING IN EAR PIECE

Number: US20200068319A1
Inventor: Sacha Michael Karl
Assignee:

Disclosed herein, among other things, are methods and apparatus for hearing assistance devices, and in particular to behind the ear and receiver in canal hearing aids with distributed processing. One aspect of the present subject matter relates to a hearing assistance device including hearing assistance electronics in a housing configured to be worn above or behind an ear of a wearer. The hearing assistance device includes an ear piece configured to be worn in the ear of the wearer and a processing component at the ear piece configured to perform functions in the ear piece and to communicate with the hearing assistance electronics, in various embodiments. 1. A hearing assistance device , comprising:hearing assistance electronics in a housing configured to be worn above or behind an ear of a wearer;an ear piece configured to be worn in the ear of the wearer; anda processing component at the ear piece configured to perform functions in the ear piece and to communicate with the hearing assistance electronics using a wired connection.2. The device of claim 1 , wherein the processing component includes a microcontroller.3. The device of claim 1 , wherein the processing component includes a microprocessor.4. The device of claim 1 , wherein the processing component includes a digital signal processor (DSP).5. The device of claim 1 , wherein the processing component includes a custom chip design.6. The device of claim 1 , wherein the processing component includes combinational logic.7. The device of claim 1 , wherein the processing component is configured to communicate with the hearing assistance electronics using a single wire.8. The device of claim 1 , wherein the ear piece includes a receiver configured to convert an electrical signal from the heating assistance electronics to an acoustic signal.9. The device of claim 1 , wherein the ear piece includes a giant magnetoresistive (GMR) sensor.10. The device of claim 9 , wherein the processing component is configured to ...

24-03-2022 publication date

VEHICLE CHARGING SYSTEMS AND METHODS OF OPERATING THEREOF

Number: US20220089055A1
Assignee: Intrinsic Power, Inc.

Provided are electric vehicle charging systems (EV charging systems) and methods of operating such systems for charging electric vehicles (EVs). A system, which may be referred to as electric vehicle service equipment (EVSE), comprises one or more EV charging ports (e.g., charge handles) for connecting to EVs. The system may include various features to identify specific EVs. The system also includes a grid connector for connecting to an external power grid and, in some examples, to monitor the power grid conditions (e.g., voltage, AC frequency). The system also includes a system controller, configured to control the power output at each EV charging port based on, e.g., vehicle charging requirements and/or available grid power. The system can also include an integrated battery, serving as a backup and/or an addition to the power grid. Furthermore, in some examples, the system includes a solar connector (e.g., with an integrated inverter) for connection to an external solar array. 1. An electric vehicle charging system comprising:an electric vehicle charging port, for connecting to an electric vehicle and charging the electric vehicle;an integrated battery;a grid connector, for connecting to an external electrical grid;a power conversion module, electrically coupled to each of the electric vehicle charging port, the integrated battery, and the grid connector and comprising at least one inverter; and 'wherein the system controller is configured to control electrical power output of at least one of the electric vehicle charging port or the integrated battery based on at least one of charging requirement of the electric vehicle and power availability from the external electrical grid.', 'a system controller, communicatively coupled to at least the power conversion module,'}2. The electric vehicle charging system of claim 1 , wherein the system controller is configured to communicatively couple to the external electrical grid.3. The electric vehicle charging system of ...

05-06-2014 publication date

HEARING AID WITH MAGNETOSTRICTIVE ELECTROACTIVE SENSOR

Number: US20140153760A1
Assignee: Starkey Laboratories, Inc.

A hearing aid includes a magnetostrictive electroactive (ME) sensor that generates an electrical signal in response to a magnetic field or a mechanical pressure. In various embodiments, the ME sensor is used for cordless charging of a rechargeable battery in the hearing aid by generating an electrical signal in response to a magnetic field generated for power transfer, magnetic sound signal reception, and/or detection of user commands by sensing a magnetic field or a pressure applied to the hearing aid. 1. A hearing aid , comprising:a hearing aid circuit including a microphone, a receiver, and an audio processor coupled between the microphone and the receiver;a rechargeable battery coupled to the hearing aid circuit to power the hearing aid circuit;a magnetostrictive electroactive (ME) sensor configured to generate an power signal in response to a first magnetic field and generate a driving signal in response to a second magnetic field or a pressure; and a battery charging circuit coupled to the rechargeable battery and configured to charge the rechargeable battery using the power signal; and', 'a switch coupled to the hearing aid circuit and configured to control the hearing aid circuit using the driving signal., 'a sensor processing circuit coupled to the ME sensor, the sensor processing circuit including2. The hearing aid of claim 1 , comprising a housing encapsulating at least the hearing aid circuit and the sensor processing circuit.3. The hearing aid of claim 2 , wherein the ME sensor is encapsulated in the housing.4. The hearing aid of claim 2 , wherein the ME sensor is incorporated into the housing.5. The hearing aid of claim 4 , wherein the housing comprises a battery door claim 4 , and the ME sensor is incorporated into the battery door.6. The hearing aid of claim 1 , wherein the ME sensor comprises two magnetostrictive layers and a piezoelectric layer sandwiched between the two magnetostrictive layers.7. The hearing aid of claim 1 , wherein the ME sensor ...

07-03-2019 publication date

DEFERRED RESPONSE TO A PREFETCH REQUEST

Number: US20190073309A1
Assignee:

Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by comparing a priority of the prefetch request with a priority of the transaction. Based on a result of the comparison, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request to include program instruction to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the prefetch request, and (ii) a delay in execution of the prefetch request for a predetermined delay period. 1. A method to modify prefetch request processing , the method comprising:receiving, by a local processor of a group of one or more processors of a local computer, a prefetch request from a remote processor of a remote computer that is in communication with the local computer;prior to execution of program instructions included in the prefetch request, determining, by the group of one or more processors, whether execution of the prefetch request is predicted to cause an address conflict by accessing a memory address that is accessed during an execution of a transaction of the local processor;responsive to a determination that execution of the prefetch request is predicted to cause a conflict during the execution of the transaction, comparing, by the group of one or more processors, a priority of the prefetch request with a priority of the transaction; andbased, at least in part, on a result of the comparison, modifying, by the group of one or more processors, one or more program instructions that govern execution of the program instructions included in the prefetch request to include program instruction to perform one or both of: (i) a quiesce of the prefetch request prior to execution of the ...

14-03-2019 publication date

PREFETCH INSENSITIVE TRANSACTIONAL MEMORY

Number: US20190079858A1
Assignee:

Processing prefetch memory operations and transactions. A local processor receives a write prefetch request from a remote processor. Prior to execution of a write prefetch request received from a remote processor, determining whether a priority of the write prefetch request is greater than a priority of a pending transaction of a local processor. The write prefetch request is executed in response to a determination that the priority of the write prefetch request is greater than the priority of a pending transaction. Prefetch data produced by execution of the write prefetch request is provided to the remote processor. 1. A method of processing prefetch memory operations and transactions , the method comprising: prior to execution of a write prefetch request received from a remote processor, determining, by one or more processors, whether a priority of the write prefetch request is greater than a priority of a pending transaction of a local processor;', 'executing, by the one or more processors, the write prefetch request in response to a determination that the priority of the write prefetch request is greater than the priority of the pending transaction; and', 'providing, by the one or more processors, a prefetch data associated with the executed write prefetch request to the remote processor., 'during a first time period2. The method of claim 1 , the method further comprising:prior to execution of the write prefetch request, determining, by the group of one or more processors, that the write prefetch request conflicts with the pending transaction of the local processor based on a comparison of memory addresses that are accessed by the write prefetch request to memory addresses that are accessed by the pending transaction.3. The method of claim 1 , wherein the write prefetch request includes program instructions to move data from a first memory to a second memory based on an anticipated access of the data by the remote processor claim 1 , wherein the second memory ...

23-03-2017 publication date

TRANSACTIONAL MEMORY COHERENCE CONTROL

Number: US20170083361A1
Assignee:

A computer-implemented method includes, in a transactional memory environment comprising a plurality of processors, identifying one or more selected processors and identifying one or more coherence privilege state indicators. The one or more coherence privilege state indicators are associated with the one or more selected processors. A coherence privilege behavioral pattern is determined based on the one or more coherence privilege state indicators. A corresponding computer program product and computer system are also disclosed. 1. A computer-implemented method comprising , in a transactional memory environment comprising a plurality of processors:identifying one or more selected processors;identifying one or more coherence privilege state indicators, said coherence privilege state indicators being associated with said one or more selected processors; anddetermining a coherence privilege behavioral pattern based on said one or more coherence privilege state indicators.2. The computer-implemented method of claim 1 , further comprising:receiving a coherence conflict indication, said coherence conflict indication caused to be sent by one or more requesting parties;determining a coherence response based on said coherence privilege behavioral pattern; andcommunicating a coherence response indication comprising said coherence response to a recipient group comprising said one or more requesting parties.3. The computer-implemented method of claim 2 , wherein:said coherence conflict indication comprises one or more identifying markers; andsaid coherence response indication comprises an indication of said one or more identifying markers.4. The computer-implemented method of claim 1 , further comprising:receiving a coherence privilege modification indication, said coherence privilege modification indication denoting a request for change in said one or more coherence privilege state indicators; anddetermining whether to approve said coherence privilege modification indication ...
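
Loosely, per-processor privilege indicators map to a behavioral pattern that decides how a coherence conflict is answered. The table-driven Python sketch below is only an invented approximation of that idea; the privilege states and responses are not taken from the filing.

# Invented privilege states and responses.
privilege = {"cpu0": "protected", "cpu1": "normal"}

BEHAVIOR = {
    "protected": "reject request, keep cache line",   # transaction keeps working
    "normal":    "grant request, abort local transaction",
}

def coherence_response(target_cpu, requesting_cpu):
    pattern = BEHAVIOR[privilege[target_cpu]]
    return f"{requesting_cpu} -> {target_cpu}: {pattern}"

print(coherence_response("cpu0", "cpu1"))
print(coherence_response("cpu1", "cpu0"))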

23-03-2017 publication date

TRANSACTIONAL MEMORY COHERENCE CONTROL

Number: US20170083442A1
Assignee:

A computer-implemented method includes, in a transactional memory environment comprising a plurality of processors, identifying one or more selected processors and identifying one or more coherence privilege state indicators. The one or more coherence privilege state indicators are associated with the one or more selected processors. A coherence privilege behavioral pattern is determined based on the one or more coherence privilege state indicators. A corresponding computer program product and computer system are also disclosed. 17.-. (canceled)8. A computer program product , the computer program product comprising:a processing circuit;one or more computer readable storage media, said computer readable storage media being readable by said processing circuit; identifying one or more selected processors;', 'identifying one or more coherence privilege state indicators, said coherence privilege state indicators being associated with said one or more selected processors; and', 'determining a coherence privilege behavioral pattern based on said one or more coherence privilege state indicators., 'wherein said computer readable storage media store instructions for execution by said processing circuit for performing a method comprising, in a transactional memory environment comprising a plurality of processors9. The computer program product of claim 8 , wherein said method further comprises:receiving a coherence conflict indication, said coherence conflict indication caused to be sent by one or more requesting parties;determining a coherence response based on said coherence privilege behavioral pattern; andcommunicating a coherence response indication comprising said coherence response to a recipient group comprising said one or more requesting parties.10. The computer program product of claim 9 , wherein:said coherence conflict indication comprises one or more identifying markers; andsaid coherence response indication comprises an indication of said one or more identifying ...

19-06-2014 publication date

HEARING ASSISTANCE DEVICE VENT VALVE

Number: US20140169603A1
Assignee: Starkey Laboratories, Inc.

Techniques are disclosed for actuating a valve of a hearing assistance device. In one example, a hearing assistance device comprises a device housing defining a vent structure, a vent valve positioned within the vent, the vent valve having first and second states. The vent valve comprises a magnet, a disk configured to move about an axis, and a magnetic catch. The hearing assistance device further comprises an actuator, and a processor configured to provide at least one signal to the actuator to cause the disk to move to controllably adjust the vent structure. 1. A hearing assistance device for providing sound to an ear canal of a user , comprising:a device housing configured to be worn at least partially in the ear canal of the user, the device housing defining a vent structure extending from a first portion of the housing to a second portion of the housing to provide an acoustic path for sounds to pass through the device; a magnet having a magnetic field;', 'a disk configured to move about an axis; and', 'a magnetic catch configured to apply a force to the disk to hold the disk in at least one of the first state and the second state;, 'a vent valve positioned within at least a portion of the vent structure, the vent valve having at least a first state and a second state, the vent valve comprisingan actuator; anda processor configured to provide at least one signal to the actuator to cause the disk to move to controllably adjust the vent structure.2. The hearing assistance device of claim 1 , wherein the magnetic catch is a magnetically permeable material that is positioned at least partially within an interior of the valve housing.3. The hearing assistance device of claim 1 , wherein the actuator is an electroactive polymer.4. The hearing assistance device of claim 1 , wherein the actuator is a shape memory alloy.5. The hearing assistance device of claim 1 , wherein the actuator is a piezoelectric element.6. The hearing assistance device of claim 1 , wherein the ...

21-03-2019 publication date

PREFETCH INSENSITIVE TRANSACTIONAL MEMORY

Number: US20190087317A1
Assignee:

Processing prefetch memory operations and transactions. A local processor receives a prefetch request from a remote processor. Prior to execution of the prefetch request, determining whether a priority of the remote processor is greater than a priority of a local processor. The write prefetch request is executed in response to a to a determination that the priority of the remote processor is greater than the priority of the local processor. Prefetch data produced by execution of the prefetch request is provided to the remote processor. 1. A method of processing prefetch memory operations and transactions , the method comprising: prior to execution of a prefetch request received from a remote processor, determining whether a priority of the remote processor is greater than a priority of a local processor;', 'executing the prefetch request in response to a determination that the priority of the remote processor is greater than the priority of the local processor; and', 'providing a prefetch data associated with the executed prefetch request to the remote processor., 'during a first time period2. The method of claim 1 , the method further comprising:prior to execution of the prefetch request, determining, by the group of one or more processors, whether the prefetch request conflicts with a transaction of the local processor based on a comparison of (i) reads and writes of the prefetch request to (ii) reads and writes of the transaction.3. The method of claim 1 , wherein the prefetch request is at least one of (i) a read prefetch request and (ii) a write prefetch request that includes program instructions to move data from a first memory to a second memory based on an anticipated access of the data by the remote processor claim 1 , wherein the second memory has a lower memory level relative to the remote processor when compared with a memory level of the first memory relative to the remote processor.4. The method of claim 1 , the method further comprising: 'responsive ...

30-03-2017 publication date

CONDITIONAL STACK FRAME ALLOCATION

Number: US20170090812A1
Assignee:

A method for allocating memory includes an operation that determines whether a prototype of a callee function is within a scope of a caller. The caller is a module containing a function call to the callee function. In addition, the method includes determining whether the function call includes one or more unnamed parameters when a prototype of the callee function is within the scope of the caller. Further, the method may include inserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters. 1. A computer-implemented method for allocating memory , comprising:determining whether a prototype of a callee function is within a scope of a caller, the caller being a module containing a function call to the callee function;when a prototype of the callee function is within the scope of the caller, determining whether the function call includes one or more unnamed parameters; andinserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters.2. The method of claim 1 , wherein the inserting of instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters further comprises:determining whether all parameters of the function can be passed in registers; andinserting instructions in the caller to allocate a parameter overflow area in the memory when all parameters of the function call cannot be passed in registers.3. The method of claim 1 , further comprising:determining whether all parameters of the function can be passed in registers when it is determined that the function call only includes named parameters; andomitting instructions in the caller to allocate a register save area in a memory when all parameters of the function call can be passed in registers.4. The method of claim 1 ...
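
The caller-side decision reads roughly like the sketch below, which is a simplification with an invented register count and a deliberately conservative rule for out-of-scope prototypes; it is not the claimed algorithm.

# Invented ABI constant: number of parameter-passing registers.
NUM_PARAM_REGS = 8

def frame_areas(prototype_in_scope, named_args, unnamed_args):
    """Return which optional stack areas the caller must allocate."""
    areas = []
    if not prototype_in_scope or unnamed_args > 0:
        # Callee may walk a va_list, so spill the parameter registers.
        areas.append("register save area")
    if named_args + unnamed_args > NUM_PARAM_REGS:
        areas.append("parameter overflow area")
    return areas or ["none"]

print(frame_areas(prototype_in_scope=True,  named_args=2,  unnamed_args=0))  # ['none']
print(frame_areas(prototype_in_scope=True,  named_args=1,  unnamed_args=3))  # save area
print(frame_areas(prototype_in_scope=False, named_args=10, unnamed_args=0))  # both areas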

30-03-2017 publication date

DYNAMIC RELEASING OF CACHE LINES

Number: US20170090977A1
Assignee:

A computer-implemented method includes, in a transactional memory environment, identifying a transaction and identifying one or more cache lines. The cache lines are allocated to the transaction. A cache line record is stored. The cache line record includes a reference to the one or more cache lines. An indication is received. The indication denotes a request to demote the one or more cache lines. The cache line record is retrieved, and the one or more cache lines are released. A corresponding computer program product and computer system are also disclosed. 1. A computer-implemented method comprising , in a transactional memory environment:identifying a transaction;identifying one or more cache lines, said one or more cache lines being allocated to said transaction;storing a cache line record, said cache line record comprising a reference to said one or more cache lines;receiving an indication, said indication denoting a request to demote said one or more cache lines;retrieving said cache line record; andreleasing said one or more cache lines.2. The computer-implemented method of claim 1 , wherein said indication is provided by at least one element selected from the group consisting of:one or more machine-level instructions to a computer hardware component;one or more values in one or more computer control registers; anddetection of one or more conflict conditions by a contention management policy.3. The computer-implemented method of claim 1 , wherein:storing said cache line record comprises storing a reference to load cache lines in a level one cache; andretrieving said cache line record comprises accessing said level one cache.4. The computer-implemented method of claim 1 , wherein:storing said cache line record comprises storing a reference to store cache lines in a store buffer; andretrieving said cache line record comprises accessing said store buffer.5. The computer-implemented method of claim 1 , wherein:storing said cache line record comprises storing a ...
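
The record-and-release flow can be rendered as the toy Python below: cache lines touched by a transaction are recorded, and a later demote indication retrieves the record and releases exactly those lines. The data structures are invented stand-ins for the hardware state.

# Invented structures: a per-transaction record of cache lines it holds.
cache_line_records = {}     # txn_id -> set of cache line addresses
held_lines = set()          # lines currently held for some transaction

def track(txn_id, line_addr):
    cache_line_records.setdefault(txn_id, set()).add(line_addr)
    held_lines.add(line_addr)

def demote(txn_id):
    # Indication to demote: retrieve the record and release its lines.
    for line in cache_line_records.pop(txn_id, set()):
        held_lines.discard(line)

track("txn1", 0x80)
track("txn1", 0xC0)
demote("txn1")
print(held_lines)   # set(): lines released without ending the transaction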

30-03-2017 publication date

MEMORY ACCESS REQUEST FOR A MEMORY PROTOCOL

Number: US20170090978A1
Assignee:

A computer-implemented method includes identifying two or more memory locations and referencing, by a memory access request, the two or more memory locations. The memory access request is a single action pursuant to a memory protocol. The computer-implemented method further includes sending the memory access request from one or more processors to a node and fetching, by the node, data content from each of the two or more memory locations. The computer-implemented method further includes packaging, by the node, the data content from each of the two or more memory locations into a memory package, and returning the memory package from the node to the one or more processors. A corresponding computer program product and computer system are also disclosed. 1. A computer-implemented method comprising:identifying two or more memory locations;referencing, by a memory access request, said two or more memory locations;sending said memory access request from one or more processors to a node, said memory access request comprising a single action pursuant to a memory protocol, wherein said node is configured for managing requests under said memory protocol;fetching, by said node, data content from each of said two or more memory locations;packaging, by said node, said data content from each of said two or more memory locations into a memory package;returning said memory package from said node to said one or more processors; andinitiating a transaction in a transactional memory environment, wherein said two or more memory locations are used in said transaction.2. The computer-implemented method of claim 1 , wherein said node is of at least one type selected from the group consisting of:(a) one or more additional processors;(b) a memory controller; and(c) a cache controller.3. The computer-implemented method of claim 1 , wherein:said one or more processors communicate with said node via a system bus;said one or more processors are in electronic communication with at least one of a ...

30-03-2017 publication date

CONDITIONAL STACK FRAME ALLOCATION

Number: US20170091088A1
Assignee:

A method for allocating memory includes an operation that determines whether a prototype of a callee function is within a scope of a caller. The caller is a module containing a function call to the callee function. In addition, the method includes determining whether the function call includes one or more unnamed parameters when a prototype of the callee function is within the scope of the caller. Further, the method may include inserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters. 1. A system for allocating memory , comprising:a processor; anda memory to store a compiler and one or more modules, the compiler being comprised of instructions that when executed by the processor, cause the processor to perform the following operations:determining whether a prototype of a callee function is within a scope of a caller, the caller being a module containing a function call to the callee function;when a prototype of the callee function is within the scope of the caller, determining whether the function call includes one or more unnamed parameters; andinserting instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters.2. The system of claim 1 , wherein the inserting of instructions in the caller to allocate a register save area in a memory when it is determined that the function call includes one or more unnamed parameters further comprises:determining whether all parameters of the function can be passed in registers; andinserting instructions in the caller to allocate a parameter overflow area in the memory when all parameters of the function call cannot be passed in registers.3. The system of claim 1 , further comprising:determining whether all parameters of the function can be passed in registers when it is determined that the function call only includes named ...

30-03-2017 publication date

DYNAMIC RELEASING OF CACHE LINES

Number: US20170091091A1
Assignee:

A computer-implemented method includes, in a transactional memory environment, identifying a transaction and identifying one or more cache lines. The cache lines are allocated to the transaction. A cache line record is stored. The cache line record includes a reference to the one or more cache lines. An indication is received. The indication denotes a request to demote the one or more cache lines. The cache line record is retrieved, and the one or more cache lines are released. A corresponding computer program product and computer system are also disclosed. 17.-. (canceled)8. A computer program product , the computer program product comprising one or more computer readable storage media and program instructions stored on said one or more computer readable storage media , said program instructions comprising instructions to perform a method comprising , in a transactional memory environment:identifying a transaction;identifying one or more cache lines, said one or more cache lines being allocated to said transaction;storing a cache line record, said cache line record comprising a reference to said one or more cache lines;receiving an indication, said indication denoting a request to demote said one or more cache lines;retrieving said cache line record; andreleasing said one or more cache lines.9. The computer program product of claim 8 , wherein said indication is provided by at least one element selected from the group consisting of:one or more machine-level instructions to a computer hardware component;one or more values in one or more computer control registers; anddetection of one or more conflict conditions by a contention management policy.10. The computer program product of claim 8 , wherein:storing said cache line record comprises storing a reference to load cache lines in a level one cache; andretrieving said cache line record comprises accessing said level one cache.11. The computer program product of claim 8 , wherein:storing said cache line record ...

05-04-2018 publication date

MULTI-SECTION GARBAGE COLLECTION

Number: US20180095874A1
Assignee:

A method and apparatus for garbage collection is disclosed herein. The method includes performing a garbage collection process without pausing execution of a runtime environment. The method also includes executing a first CPU instruction to load a first pointer that points to a first location in a first region of memory, where the first region of memory is undergoing garbage collection. The method also includes moving a first object pointed to by the first pointer from the first location in memory to a second location in memory. 1. A method comprising:performing a garbage collection process without pausing execution of a runtime environment;executing a first CPU instruction to load a first pointer that points to a first location in a first region of memory, wherein the first region of memory is undergoing garbage collection; andmoving a first object pointed to by the first pointer from the first location in memory to a second location in memory.2. The method of claim 1 , the method further comprising:specifying a load-monitored region within the memory, wherein the load-monitored region of memory is currently undergoing garbage collection.3. The method of claim 1 , further comprising:updating a table that tracks the movement of the first object pointed to by the first pointer from the first location in memory to a second location in memory.4. The method of claim 2 , further comprising:loading the specified load-monitored region into a load-monitored region register.5. The method of claim 4 , further comprising:enabling a section of the specified load-monitored region into a load-monitored section enable register.6. The method of claim 1 , wherein moving the first object comprises:copying the first object from the first location in memory to the second location in memory and updating the first pointer to point to the first object in the second location of memory.7. The method of claim 2 , further comprising:executing a second CPU instruction to load a second pointer ...
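
Very roughly, the load-side behavior looks like the sketch below, with Python stand-ins for what the filing describes as CPU instructions and registers: loading a pointer that falls inside the region under collection moves the referenced object at once and records the new address. Addresses, region bounds, and the forwarding table are invented.

# Toy heap: address -> object. The "load-monitored" region is under collection.
heap = {100: "obj_a", 500: "obj_b"}
LOAD_MONITORED = range(0, 256)          # region currently being collected
forwarding = {}                         # old address -> new address
next_free = 1000

def load_pointer(ptr):
    """Stand-in for a pointer-load instruction with GC assistance."""
    global next_free
    if ptr in LOAD_MONITORED and ptr not in forwarding:
        # Move the object out of the region being collected, record the move.
        heap[next_free] = heap.pop(ptr)
        forwarding[ptr] = next_free
        next_free += 8
    return forwarding.get(ptr, ptr)     # possibly updated pointer

p = load_pointer(100)
print(p, heap[p], forwarding)           # 1000 obj_a {100: 1000}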

06-04-2017 publication date

HYBRID SHELL FOR HEARING AID

Number: US20170099553A1
Assignee:

A method is a described for constructing a hearing aid shell that comprises a combination of hard and soft materials. In one embodiment, 3D printing is combined with conventional mold/casting methods so that a first shell portion made of a hard material and a mold for a second shell portion are 3D printed. The mold is then filled with a soft material which is allowed to set to form the second shell portion, and the first and second shell portions are adhesively attached. 1. A method for constructing a hearing aid shell , comprising:3D printing a first shell portion made of a hard material;3D printing a mold for a second shell portion;filling the mold with a soft material which is allowed to set to form the second shell portion; and,adhesively attaching the first and second shell portions.2. The method of wherein the soft material is silicone.3. The method of wherein the soft material is transparent silicone.4. The method of further comprising 3D printing alignment features to assure that the first and second shell portions fit together.5. The method of further comprising 3D printing textured surfaces on the surfaces of the first and second shell portions that are adhesively attached.6. The method of wherein the textured surfaces of the first and second shell portions comprise interlocking portions that increase the surface area of contact.7. The method of wherein the textured surfaces of the first and second shell portions comprise rough and irregular portions that increase the surface area of contact.8. The method of wherein the textured surfaces of the first and second shell portions comprise overlapping portions that increase the surface area of contact.9. The method of further comprising disposing one or more acoustic seal rings around a portion of the hearing aid shell that is adapted to be inserted into a patient's external ear canal.10. The method of further comprising adhering the first shell portion to the soft material in the mold and claim 1 , after ...

13-04-2017 publication date

DIRECT APPLICATION-LEVEL CONTROL OF MULTIPLE ASYNCHRONOUS EVENTS

Number: US20170102964A1
Assignee:

Methods for enabling an application-level direct control of multiple facilities are disclosed herein. In one embodiment, the method includes reading, by operation of an application-level handler, a register configured to store status information and control information associated with a plurality of facilities, wherein a facility is a process running independently from a processor, determining an order of priority for events in the register based on the status information and control information of the multiple facilities, and processing the events in the order of priority such that an application can directly control the multiple facilities simultaneously. 1. A method , comprising:reading, by operation of an application-level handler, a register configured to store status information and control information associated with a plurality of facilities, wherein the status information indicates which facility of the plurality of facilities triggered an exception and the control information indicates whether additional exceptions can occur for the facility until an event that triggered the exception is handled, wherein a facility is a hardware unit running independently from a processor;determining an order of priority for events in the register based on the status information and control information of the multiple facilities; andprocessing the events in the order of priority such that an application can directly control the multiple facilities simultaneously.2. The method of claim 1 , further comprising:storing, in a second register, an address of the application-level handler; and modifying status information stored in the register;', 'loading the address of the application-level handler from the second register; and', 'transferring control to the application-level handler., 'in response to an event-based exception3. The method of claim 1 , wherein executing a handler to read a register configured to store status information and control information pertaining to ...
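
A rough software analogue of the handler is sketched below: read one status/control word, order pending events by priority, and dispatch them from a single application-level handler. The facility names, priorities, and register encoding are invented.

# Invented encoding: each facility has a status flag (event pending) and a
# control flag (further events masked until this one is handled).
FACILITIES = {"perf_monitor": 2, "gc_assist": 1, "watchdog": 0}   # name -> priority
status  = {"perf_monitor": True, "gc_assist": True, "watchdog": False}
control = {"perf_monitor": "masked", "gc_assist": "enabled", "watchdog": "enabled"}

def app_level_handler():
    pending = [f for f, hit in status.items() if hit]
    # Higher priority first, so one handler can serve several facilities.
    for facility in sorted(pending, key=FACILITIES.get, reverse=True):
        print(f"handling event from {facility} (control: {control[facility]})")
        status[facility] = False

app_level_handler()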

13-04-2017 publication date

DIRECT APPLICATION-LEVEL CONTROL OF MULTIPLE ASYNCHRONOUS EVENTS

Number: US20170102973A1
Assignee:

Apparatus for enabling application-level direct control of multiple facilities are disclosed herein. In one embodiment, a processor comprising a plurality of facilities comprised of hardware units that run independently from the processor; and, a register configured to store status information and control information associated with the plurality of facilities. The processor is configured to perform an operation that includes reading, by operation of an application-level handler, the register, determining an order of priority for events in the register based on the status information and control information of the multiple facilities, and processing the events in the order of priority such that an application can directly control the multiple facilities simultaneously. 17.-. (canceled)8. A system , comprising: a plurality of facilities comprised of hardware units that run independently from the processor;', 'a register configured to store status information and control information associated with the plurality of facilities; and, 'a processor, comprising reading, by operation of an application-level handler, the register configured to store status information and control information associated with the plurality of facilities, wherein the status information indicates which facility of the plurality of facilities triggered an exception and the control information indicates whether additional exceptions can occur for the facility until an event that triggered the exception is handled;', 'determining an order of priority for events in the register based on the status information and control information of the multiple facilities;', 'processing the events in the order of priority such that an application can directly control the multiple facilities simultaneously., 'a memory storing program code, which when executed on the processor performs an operation for enabling an application direct control of the facilities, the operation comprising9. The system of claim 8 , ...

23-04-2015 publication date

INPUT STAGE HEADROOM EXPANSION FOR HEARING ASSISTANCE DEVICES

Number: US20150110312A1
Assignee: Starkey Laboratories, Inc.

Disclosed herein, among other things, are systems and methods for input stage headroom expansion for hearing assistance devices. One aspect of the present subject matter includes a hearing assistance device. According to various embodiments, the hearing assistance device includes an input stage including a microphone configured with variable sensitivity, and hearing assistance electronics connected to the microphone. The hearing assistance electronics are configured to process a signal received by the microphone for hearing assistance for a wearer of the hearing assistance device, in an embodiment. A receiver is connected to the hearing assistance electronics and configured to output the processed signal to the user, in various embodiments. According to various embodiments, the hearing assistance electronics are configured to dynamically change the sensitivity of the microphone to change headroom of the input stage. 1. A method , comprising:sensing input sound pressure level for a hearing assistance device;dynamically changing sensitivity of a microphone of the hearing assistance device to change headroom of an input stage of the hearing assistance device based on the sensed input sound pressure level.2. The method of claim 1 , wherein dynamically changing sensitivity of a microphone includes changing a bias voltage of the microphone.3. The method of claim 1 , wherein changing sensitivity includes decreasing sensitivity to increase the headroom.4. The method of claim 1 , wherein changing sensitivity includes increasing sensitivity to decrease the headroom.5. The method of claim 1 , wherein changing sensitivity includes using a predetermined increment level hardcoded in the microphone.6. The method of claim 1 , wherein changing sensitivity includes specifying an increment level.7. The method of claim 1 , wherein changing sensitivity includes switching between a maximum and a minimum sensitivity value.8. The method of claim 7 , wherein switching between a maximum and ...

23-04-2015 publication date

METHOD AND APPARATUS FOR BEHIND-THE-EAR HEARING AID WITH CAPACITIVE SENSOR

Number: US20150110323A1
Inventor: Sacha Michael Karl
Assignee:

Disclosed herein, among other things, are methods and apparatus for a behind-the-ear hearing aid with a capacitive sensor. 1a behind-the-ear housing having an outer surface;hearing assistance electronics;capacitive sensing electronics connected to the hearing assistance circuit; anda plurality of electrodes placed on or near the outer surface of the housing and connected to the capacitive sensing circuit,wherein the capacitive sensing electronics are adapted to detect motion of the wearer in proximity of the plurality of electrodes.. An apparatus for use by a wearer, comprising: The present application is a continuation of and claims the benefit of priority to U.S. patent application Ser. No. 12/905,444, filed on Oct. 15, 2010, which application claims the benefit of priority under 35 USC 119(e) to U.S. Provisional Patent Application Serial No. 61/252,639 filed on Oct. 17, 2009, and claims the benefit of priority under 35 USC 119(e) to U.S. Provisional Patent Application Ser. No. 61/253,358 filed on Oct. 20, 2009; all of which are incorporated herein by reference in their entirety.The present subject matter relates generally to hearing aids, and in particular to an behind-the-ear hearing aid with capacitive sensor.The smaller a hearing aid becomes, the more difficult it can be to put in the ear, take out of the ear, and to operate. Even simple switching of the device becomes more difficult as the device becomes smaller. The controls on a behind-the-ear hearing aid (BTE hearing aid) can be difficult to access and to operate.Thus, there is a need in the art for a system for improved controls for hearing aids. There is a need in the art for improved controls for behind-the-ear hearing aids.Disclosed herein, among other things, are methods and apparatus for a behind-the-ear hearing aid with a capacitive sensor. In various embodiments, the present subject matter includes apparatus for use by a wearer, including: a behind-the-ear housing having an outer surface; hearing ...

11-04-2019 publication date

COMPILER OPTIMIZATIONS FOR VECTOR OPERATIONS THAT ARE REFORMATTING-RESISTANT

Number: US20190108005A1
Assignee:

An optimizing compiler includes a vector optimization mechanism that optimizes vector operations that are reformatting-resistant, such as source instructions that do not have a corresponding reformatting operation, sink instructions that do not have a corresponding reformatting operation, a source instruction that is a scalar value, a sink instruction that may produce a scalar value, and an internal operation that depends on lanes being in a specified order. The ability to optimize vector instructions that are reformatting-resistant reduces the number of operations to improve the run-time performance of the code. 1. An apparatus comprising:at least one processor;a memory coupled to the at least one processor;a computer program residing in the memory, the computer program including a plurality of instructions that includes at least one vector operation and that includes a plurality of reformatting-resistant vector operations that comprises a source instruction that operates on a scalar value; anda compiler residing in the memory and executed by the at least one processor, the compiler including a vector instruction optimization mechanism that optimizes at least one of the plurality of reformatting-resistant vector operations in the computer program to enhance run-time performance of the computer program.2. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises a source instruction without a corresponding reformatting operation.3. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises a sink instruction that can produce a scalar value.4. The apparatus of wherein the plurality of reformatting-resistant vector operations comprises an internal operation that depends on lanes being in a specified order.5. The apparatus of wherein the vector instruction optimization mechanism analyzes an existing code portion in the computer program claim 1 , determines a proposed change to the existing code ...

Подробнее
28-04-2016 дата публикации

LINKING A FUNCTION WITH DUAL ENTRY POINTS

Номер: US20160117181A1
Принадлежит:

A method for a static linker to resolve a function call can include identifying, during link time, a first function call of a calling function to a callee function, determining whether the callee function is a local function, determining whether the callee function has a plurality of entry points, and whether an entry point of the plurality of entry points is a local entry point. The method can include resolving, during link time, the first function call to enter the local entry point, which can include replacing a symbol for the function in the first function call with an address of the local entry point during link time. If the callee function cannot be determined to be a local function, the method can include generating stub code and directing the first function call to enter the stub code during link time.

1. A computer-implemented method for a static linker to resolve a function call, comprising: identifying, during link time, a first function call of a calling function to a callee function; determining whether the callee function is a local function; in response to determining that the callee function is a local function, determining whether the callee function has a plurality of entry points and whether an entry point of the plurality of entry points is a local entry point, wherein the determining of whether the callee function has a plurality of entry points and whether an entry point of the plurality of entry points is a local entry point includes accessing symbol information from a symbol table, the symbol table indicating a number of entry points and whether an entry point is a local entry point; and resolving, during link time, the first function call to enter the local entry point when it is determined that the callee function has a plurality of entry points and an entry point of the plurality of entry points is the local entry point, wherein the resolving of the first function call to enter the local entry point includes replacing a symbol for the callee ...
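A minimal sketch of the link-time decision this abstract describes, assuming a dual-entry-point convention in the style of the PowerPC ELFv2 ABI. The symbol_info structure, resolve_call, emit_stub, and the addresses are hypothetical stand-ins for real symbol-table data, not the actual linker logic.

```c
/* Hedged model: resolve a call either to a callee's local entry point or to
 * generated stub code, depending on what the symbol table says. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    bool        is_local;          /* defined in the module being linked */
    uint64_t    global_entry;      /* entry that sets up the TOC/GOT pointer */
    unsigned    num_entry_points;  /* 1 = single entry, 2 = dual entry */
    uint64_t    local_entry;       /* valid when num_entry_points == 2 */
} symbol_info;

static uint64_t emit_stub(const symbol_info *s) {
    /* Placeholder for PLT-style stub generation. */
    printf("  generating stub for %s\n", s->name);
    return 0x9000; /* hypothetical stub address */
}

/* Returns the address the call site should branch to. */
static uint64_t resolve_call(const symbol_info *callee) {
    if (callee->is_local && callee->num_entry_points > 1)
        return callee->local_entry;      /* skip TOC setup: use local entry */
    if (callee->is_local)
        return callee->global_entry;     /* single entry point */
    return emit_stub(callee);            /* not known to be local: stub code */
}

int main(void) {
    symbol_info dual   = {"helper", true,  0x1000, 2, 0x1008};
    symbol_info import = {"malloc", false, 0,      1, 0};
    printf("helper resolves to 0x%llx\n",
           (unsigned long long)resolve_call(&dual));
    printf("malloc resolves to 0x%llx\n",
           (unsigned long long)resolve_call(&import));
    return 0;
}
```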

Подробнее
28-04-2016 дата публикации

LINKING A FUNCTION WITH DUAL ENTRY POINTS

Номер: US20160117201A1
Принадлежит:

A method for a static linker to resolve a function call can include identifying, during link time, a first function call of a calling function to a callee function, determining whether the callee function is a local function, determining whether the callee function has a plurality of entry points, and whether an entry point of the plurality of entry points is a local entry point. The method can include resolving, during link time, the first function call to enter the local entry point, which can include replacing a symbol for the function in the first function call with an address of the local entry point during link time. If the callee function cannot be determined to be a local function, the method can include generating stub code and directing the first function call to enter the stub code during link time.

1.-9. (canceled)
10. A computer program product for a static linker to resolve a function call, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by a computer processor to cause the computer processor to: identify, during link time, a first function call of a calling function to a callee function; determine whether the callee function is a local function; in response to determining that the callee function is a local function, determine whether the callee function has a plurality of entry points and whether an entry point of the plurality of entry points is a local entry point, the determining of whether the callee function has a plurality of entry points and whether an entry point of the plurality of entry points is a local entry point including accessing symbol information from a symbol table, the symbol table indicating a number of entry points and whether an entry point is a local entry point; and resolve, during link time, the first function call to enter the local entry point when it is determined that the callee function has a plurality of entry ...

Подробнее
26-04-2018 дата публикации

Local Function Call Tailoring for Function Pointer Calls

Номер: US20180113685A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to identify potential target functions and indicate the potential target functions in the program code. A linker can read the indication the compiler made in the program code. The linker optimizes an indirect call site if the potential target functions are defined in the same module.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and a compiler in communication with the processor, the compiler to generate program code for an application module selected from the group consisting of: the first application module, the second application module, the third application module and a fourth application module, the generation of program code including: determine all potential target functions of the indirect function call; and indicate, in the generated program code, the determined potential target functions of the indirect function call including annotate a call site of the indirect function call.
2. The system of claim 1, wherein the determination of all potential target functions of the indirect function call further comprises the compiler to: identify a function pointer associated with the indirect function call; compute a transitive closure of a reaching definition of the ...
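For illustration only, the C fragment below shows the kind of call site such a compiler could annotate: every reaching definition of the function pointer names a function defined in the same module, so the linker described above could tailor the indirect call. The functions add_one and double_it are invented for the example; this is a sketch of the situation, not the patented analysis.

```c
/* Hedged illustration: an indirect call whose potential targets can all be
 * determined and are all defined in this same module. */
#include <stdio.h>

static int add_one(int x)   { return x + 1; }
static int double_it(int x) { return 2 * x; }

int main(int argc, char **argv) {
    (void)argv;

    /* The only reaching definitions of fp are &add_one and &double_it, both
       local to this module; a compiler could annotate this call site so the
       linker may bind it to local entry points directly. */
    int (*fp)(int) = (argc > 1) ? add_one : double_it;
    printf("%d\n", fp(20));
    return 0;
}
```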

Подробнее
26-04-2018 дата публикации

Comparisons in Function Pointer Localization

Номер: US20180113687A1
Принадлежит: International Business Machines Corp

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to determine and indicate in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module or comparisons against function pointers. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.
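As a hedged illustration of the usage pattern the compiler is said to detect, the sketch below takes a non-call reference of a function symbol, compares it against a function pointer, and performs the indirect call only within the same module; the function name handler is hypothetical. One way to read the abstract is that the comparison only stays correct if both references resolve to the same entry-point address, which is presumably why the local entry point is used only for targets defined in the same module.

```c
/* Hedged sketch: a function pointer that is only (a) called within this
 * module and (b) compared against other function pointers. */
#include <stdio.h>

void handler(void) { puts("handler ran"); }

int main(void) {
    void (*fp)(void) = handler;   /* non-call reference of the symbol */

    /* Comparison against a function pointer: for this to remain correct when
       fp is resolved to a local entry point, both references must resolve to
       the same entry point address. */
    if (fp == handler)
        fp();                      /* indirect call in the same module */
    return 0;
}
```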

Подробнее
26-04-2018 дата публикации

Compiling Optimized Entry Points for Local-Use-Only Function Pointers

Номер: US20180113688A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to determine and indicate in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and a compiler in communication with the processor, the compiler to generate program code for an application module selected from the group consisting of: the first application module, the second application module, the third application module, and a fourth application module, including: load a first address of a function by using a first symbolic reference; determine that an employed value of the first symbolic reference is used exclusively to perform the indirect function call in the application module; and indicate, in the generated program code, that the first symbolic reference can be resolved using a local entry point address of the function.
2. The ...

Подробнее
26-04-2018 дата публикации

Local Function Call Site Optimization

Номер: US20180113689A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to identify potential target functions and indicate the potential target functions in the program code. A linker can read the indication the compiler made in the program code. The linker optimizes an indirect call site if the potential target functions are defined in the same module.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and a linker in communication with the processor, the linker to generate an application module, including optimization of the indirect function call, the generated application module selected from the group consisting of: the first application module, the second application module, the third application module and a fourth application module, the optimization including: determine a program code of the generated application module indicates one or more potential target functions associated with the indirect function call; determine the one or more potential target functions are defined in the generated application module; and rewrite the indirect function call of the one or more potential target functions.
2. The system of claim 1, wherein the rewrite of the indirect function call further comprises the linker to change program code at the ...

Подробнее
26-04-2018 дата публикации

Optimized Entry Points and Local Function Call Tailoring for Function Pointers

Номер: US20180113690A1

Embodiments relate to optimizing an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to identify potential target functions and indicate the potential target functions in the program code. Additionally, the compiler determines and indicates in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module. A linker can read the indication the compiler made in the program code and optimize the indirect call function.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; determine all potential target functions of the indirect function call; and indicate, in program code, the determined potential target functions of the indirect function call including annotate a call site of the indirect function call; and perform an indirect call site optimization including: create a set with at least two symbolic references; load addresses of the at least one potential target function by using the at least two symbolic references contained in the set; determine that employed values of the at least two symbolic references are used to perform an operation selected from the group consisting of: the indirect function call in the first application module, a comparison to at least one ...

Подробнее
26-04-2018 дата публикации

Compiling Optimized Entry Points for Local-Use-Only Function Pointers

Номер: US20180113691A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler determines and indicates, in the program code, that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

1. A method for resolving a function address comprising: configuring an indirect function call configuration, the configuration including defining a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; loading a first address of a function by using a first symbolic reference; determining that an employed value of the first symbolic reference is used exclusively to perform the indirect function call in the first application module; and indicating, in program code, that the symbolic reference can be resolved using a local entry point address of the function.
2. The method of claim 1, wherein the first symbolic reference can be resolved by a linker or loader.
3. The method of claim 1, further comprising indicating the first symbolic reference can be resolved to a linker or a loader.
4. The method of claim 1, further comprising assessing a location holding a function pointer value ...

Подробнее
26-04-2018 дата публикации

Linking Optimized Entry Points for Local-Use-Only Function Pointers

Номер: US20180113692A1
Принадлежит: International Business Machines Corp

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler determines and indicates, in the program code, that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

Подробнее
26-04-2018 дата публикации

Optimized Entry Points and Local Function Call Tailoring for Function Pointers

Номер: US20180113693A1

Embodiments relate to optimizing an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler identifies potential target functions and indicates the potential target functions in the program code. Additionally, the compiler determines and indicates in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module. A linker can read the indication the compiler made in the program code and optimize the indirect call function.

1. A method comprising: configuring an indirect function call configuration, the configuration including defining a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; determining all potential target functions of the indirect function call; indicating, in program code, the determined potential target functions of the indirect function call including annotating a call site of the indirect function call; and performing an indirect call site optimization including: creating a set with at least two symbolic references; loading addresses of the at least one potential target function by using the at least two symbolic references contained in the set; determining that employed values of the at least two symbolic references are used to perform an operation selected from the group consisting of: the indirect function call in the first application module, a comparison to at least one symbolic reference contained in the set, and a ...

Подробнее
26-04-2018 дата публикации

Linking Optimized Entry Points for Local-Use-Only Function Pointers

Номер: US20180113694A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to determine and indicate in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and resolve a symbolic reference to a function address, the resolution includes to: determine that a program code of the generated application module indicates that the symbolic reference can be resolved using a local entry point address of the function if the function is defined in the generated application module; determine that the symbolic reference refers to a function defined in the generated application module; and store a local entry point address of the function into the generated application module; a linker in communication with the processor, the linker to generate an application module, the generated application module selected from the ...

Подробнее
26-04-2018 дата публикации

Loading Optimized Local Entry Points for Local-Use-Only Function Pointers

Номер: US20180113695A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to determine and indicate in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and a loader in communication with the processor, the loader to load an application module into at least one memory, the loaded application selected from the group consisting of: the first application module, the second application module, the third application module and a fourth application module, the loading including: resolve a symbolic reference to an address of a function, the resolution includes to: determine a function definition that the symbolic reference refers to and an associated entry point address, and if the definition is in the loaded application module, store a local entry point address of the function into memory; and if the definition ...
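A load-time counterpart to the linker sketch earlier in this listing, under the same assumptions: the loaded_symbol structure, fixup_slot, and the addresses below are hypothetical and are not a real dynamic loader. The pointer slot receives the local entry point when the definition lives in the module being loaded, and the global entry point otherwise.

```c
/* Hedged sketch of load-time resolution of a function-pointer slot. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;
    bool        defined_here;   /* definition is in the module being loaded */
    uint64_t    global_entry;
    uint64_t    local_entry;    /* equal to global_entry if no dual entry   */
} loaded_symbol;

/* Fill one function-pointer slot during load. */
static void fixup_slot(uint64_t *slot, const loaded_symbol *sym) {
    *slot = sym->defined_here ? sym->local_entry : sym->global_entry;
}

int main(void) {
    loaded_symbol local_fn  = {"callback", true,  0x1000, 0x1008};
    loaded_symbol extern_fn = {"qsort",    false, 0x7f00, 0x7f00};
    uint64_t slot_a, slot_b;
    fixup_slot(&slot_a, &local_fn);
    fixup_slot(&slot_b, &extern_fn);
    printf("callback slot -> 0x%llx\n", (unsigned long long)slot_a);
    printf("qsort    slot -> 0x%llx\n", (unsigned long long)slot_b);
    return 0;
}
```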

Подробнее
26-04-2018 дата публикации

Executing Optimized Local Entry Points

Номер: US20180113696A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to determine and indicate in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module, e.g. local-use-only. A linker or loader can read the indication the compiler made in the program code. The linker or loader uses the local entry point associated with the target function if the target function is defined in the same module as the reference and is local-use-only.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and to load an address of a function, including load an entry point address of a function, the entry point address selected from the group consisting of: a local entry point address of the function defined in the loaded application module and a global entry point address, and transferring execution to the entry point address; and to perform the indirect function call using the address of the function, the performance of the indirect function call further including to transfer execution to the entry point address selected from the group consisting of: the local entry point address of the function defined in the loaded application module and the global entry ...
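The execution-time effect of the two entry points can be mimicked in plain C. The simulation below is not machine code; toc_register, global_entry, and local_entry are invented stand-ins, assuming a PowerPC64 ELFv2-style convention in which the global entry establishes the module's TOC base before falling into the function body and the local entry assumes the caller's TOC is already correct.

```c
/* Hedged simulation of executing through a dual-entry-point function. */
#include <stdio.h>
#include <assert.h>

static long toc_register;                 /* stands in for the TOC pointer  */
static const long MODULE_TOC_BASE = 0x4000;

/* Body shared by both entry points; relies on toc_register being valid. */
static int body(int x) {
    assert(toc_register == MODULE_TOC_BASE);
    return x + 1;
}

static int global_entry(int x) {          /* sets up the TOC, then runs body */
    toc_register = MODULE_TOC_BASE;
    return body(x);
}

static int local_entry(int x) {           /* skips the TOC setup             */
    return body(x);
}

int main(void) {
    /* Cross-module call: TOC unknown, must use the global entry point. */
    toc_register = 0;
    printf("%d\n", global_entry(41));

    /* Same-module call: TOC already valid, the local entry point suffices. */
    printf("%d\n", local_entry(41));
    return 0;
}
```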

Подробнее
26-04-2018 дата публикации

Executing Local Function Call Site Optimization

Номер: US20180113697A1

Embodiments relate to using a local entry point with an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to identify potential target functions and indicate the potential target functions in the program code. A linker can read the indication the compiler made in the program code. The linker optimizes an indirect call site if the potential target functions are defined in the same module.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and the processor executing program code from an application module loaded into memory, the application module selected from the group consisting of: the first application module, the second application module, the third application module, and a fourth application module, the executing including: load the address of a function, including to skip a program instruction selected from the group consisting of: loading a table of contents pointer register and restoring a table of contents pointer register; and perform the indirect function call using the address of the function, including to skip over a program instruction selected from the group consisting of: loading a table of contents pointer register and restoring a table of contents pointer register.
2. The system of claim 1, further comprising a compiler to determine that the address of the function ...
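On the caller side, the instruction skipping described here can be pictured as dropping the save and restore of the table of contents pointer around an indirect call once every possible target is known to live in the same module. The C model below is only a stand-in (toc_register, call_external, call_local, and local_target are invented names); real code would omit actual TOC save/restore instructions emitted around the call rather than assignments to a variable.

```c
/* Hedged caller-side model of skipping TOC save/restore at a call site. */
#include <stdio.h>

static long toc_register = 0x4000;        /* this module's TOC base          */

static int local_target(int x) { return x * 2; }   /* same-module callee     */

static int call_external(int (*fn)(int), int x) {
    long saved_toc = toc_register;        /* model: save TOC before the call */
    int r = fn(x);                        /* callee might change the TOC     */
    toc_register = saved_toc;             /* model: restore TOC afterwards   */
    return r;
}

static int call_local(int (*fn)(int), int x) {
    return fn(x);                         /* save/restore skipped            */
}

int main(void) {
    printf("%d\n", call_external(local_target, 21));
    printf("%d\n", call_local(local_target, 21));
    return 0;
}
```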

Подробнее
26-04-2018 дата публикации

EXECUTING OPTIMIZED LOCAL ENTRY POINTS AND FUNCTION CALL SITES

Номер: US20180113698A1

Embodiments relate to optimizing an indirect call function. More specifically, an indirect call function configuration comprises a first application module having a target function of the indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate an indirect function call. A compiler is provided to identify potential target functions and indicate the potential target functions in the program code. Additionally, the compiler determines and indicates in the program code that the function pointer value resulting from a non-call reference of a function symbol is solely used to perform indirect calls in the same module. A linker can read the indication the compiler made in the program code and optimize the indirect call function.

1. A computer system comprising: a memory; a processor, communicatively coupled to the memory; an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and load the address of a function, the address selected from the group consisting of: a local entry point address of a function defined in the loaded application module and a global entry point address; transfer execution to the entry point address, including skip a program instruction selected from the group consisting of: loading a table of contents pointer register and restoring a table of contents pointer register; and perform an indirect function call using the address of the function, the performance of the indirect function call further comprising transfer execution to an entry point address, the address selected from the group consisting of: the local entry point address of a function defined in the ...

Подробнее