SYSTEM AND METHOD FOR PROVIDING CONTENT TO A USER BASED ON A PREDICTED ROUTE IDENTIFIED FROM AUDIO OR IMAGES
Abstract: Embodiments are directed towards providing a system that presents content to a user of a vehicle based on where the vehicle is going. A microphone captures audio signals within the vehicle, which are analyzed for route information. These audible commands may be spoken by a person in the vehicle, such as a passenger telling the driver where to turn, or they may be received from a mobile computing device, such as a smartphone executing a map application that is providing audible directions. An anticipated route of the vehicle is determined based on the audible route information. Content is selected and presented to the user of the vehicle based on the anticipated route. Images of a display screen of the mobile computing device may also be analyzed to identify the route information.
Claims:

1. A method, comprising:
receiving, by a head unit of a vehicle, audible route information captured via a microphone coupled to the vehicle, wherein the audible route information includes at least one direction instructing a driver of the vehicle to turn the vehicle onto a named road, and wherein the audible route information is emanating from a source positioned within the vehicle;
determining, by the head unit, an estimated route of the vehicle based on the audible route information;
selecting, by the head unit, visual content to display to a user of the vehicle based on the estimated route of the vehicle; and
displaying, by the head unit, the visual content to the user.

2. The method of claim 1, wherein receiving the audible route information includes:
capturing, via the microphone, the audible route information from a voice of a person within the vehicle.

3. The method of claim 1, wherein receiving the audible route information includes:
receiving audio signals from an electronic speaker of an electronic device providing audible driving directions to the driver of the vehicle.

4. The method of claim 1, wherein receiving the audible route information includes:
capturing, via the microphone, the audible route information being output from an electronic speaker coupled to the vehicle.

5. The method of claim 1, wherein receiving the audible route information includes:
capturing, via the microphone, spoken words that include a direction of travel and street information.

6. The method of claim 1, wherein receiving the audible route information includes:
capturing, via the microphone, spoken words that include an instruction to change direction of travel of the vehicle.

7. The method of claim 1, wherein receiving the audible route information includes:
capturing, via a microphone within the vehicle, spoken words that include an instruction for the vehicle to leave a current roadway.

8. The method of claim 1, further comprising:
receiving, by the head unit, audio data via the microphone;
analyzing, by the head unit, the received audio data to identify one or more spoken words;
comparing, by the head unit, the one or more identified spoken words with a plurality of known commands; and
in response to a match between the one or more identified spoken words and a known command, identifying, by the head unit, the audible route information as the known command.

9. The method of claim 1, wherein determining the estimated route of the vehicle further comprises:
receiving, by the head unit, an image of a display screen of a mobile computing device of a person in the vehicle;
analyzing, by the head unit, the received image to identify route information on the display screen of the mobile computing device;
determining, by the head unit, visual route information commands from the route information; and
updating, by the head unit, the estimated route based on the visual route information.

10. The method of claim 1, wherein determining the estimated route of the vehicle further comprises:
determining, via a GPS unit, a current location of the vehicle; and
determining, by the head unit, the estimated route of the vehicle based on the current location of the vehicle and the audible route information.
11. The method of claim 1, wherein selecting the visual content further comprises:
identifying, by the head unit, a current route of the vehicle;
selecting, by the head unit, a previous route from a plurality of previous routes that includes a first segment that matches the current route and a second segment that matches the estimated route; and
selecting, by the head unit, the content based on the selected previous route without input or interaction from the user.

12. A system, comprising:
a memory configured to store data and computer instructions;
a display interface coupled to the memory, the display interface configured to present visual content to a user of a vehicle;
a microphone configured to capture audio from within the vehicle; and
a processor configured to execute the computer instructions to:
receive, via the microphone, audible route information that is emanating from a source within the vehicle;
determine an anticipated route of the vehicle based on the audible route information;
select visual content based on the anticipated route of the vehicle; and
display, via the display interface, the selected visual content to the user.

13. The system of claim 12, further comprising:
a camera configured to capture images of a display screen of a mobile computing device of the user;
wherein the processor determines the anticipated route of the vehicle by being configured to execute the computer instructions further to:
analyze the captured images to identify visual route information; and
determine the anticipated route of the vehicle based on a combination of the audible route information and the visual route information.

14. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, from a person within the vehicle via the microphone, spoken words that include a direction of travel and street information.

15. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, from a person within the vehicle via the microphone, spoken words that include an instruction to change direction of travel of the vehicle.

16. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, from a person within the vehicle via the microphone, spoken words that include an instruction for the vehicle to leave a current roadway.

17. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, via the microphone, audio signals emanating from an electronic speaker within the vehicle that include spoken words of a direction of travel and street information.

18. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, via the microphone, audio signals emanating from an electronic speaker within the vehicle that include spoken words of an instruction to change direction of travel of the vehicle.
19. The system of claim 12, wherein the processor receives the audible route information emanating from within the vehicle by being configured to execute the computer instructions further to:
capture, via the microphone, audio signals emanating from an electronic speaker within the vehicle that include spoken words of an instruction for the vehicle to leave a current roadway.

20. A computing device, comprising:
a memory configured to store computer instructions;
an output interface configured to present content to a user of a vehicle;
a microphone configured to capture audio emanating from within the vehicle; and
a processor configured to execute the computer instructions to:
capture, via the microphone, audible signals emanating from within the vehicle;
analyze the captured audible signals to identify audible route information;
determine a next course of travel of the vehicle based on the audible route information;
select content to present to the user of the vehicle based on the next course of travel of the vehicle; and
present, via the output interface, the selected content to the user.
Description: The present disclosure relates generally to the dynamic, real-time selection and presentation of content to a person in a vehicle based on where the vehicle is expected to be traveling.

Automobiles are becoming more and more user-friendly and interactive. Many new cars are now manufactured with a user interface, called a head unit, which a user can use to control various aspects of the automobile and access a variety of content or applications. For example, the user can use the head unit to change radio stations, change the temperature of the automobile cabin, access maps and global positioning systems, access the internet, access other head-unit applications, or access or control other accessories of the automobile. Even though head units offer multiple features to the user, the manufacturers of these devices are constantly striving to incorporate new features into them.

Embodiments are directed towards providing a system that presents content to a user of a vehicle based on where the vehicle is going. A microphone captures audio signals within the vehicle, which are analyzed for route information, e.g., “turn left at Main Street,” “take next exit,” etc. These audible commands may be spoken by a person in the vehicle, such as a passenger telling the driver where to turn, or they may be received from a mobile computing device, such as a smartphone executing a map application that is providing audible directions. An anticipated route of the vehicle is determined based on the audible route information. Content is selected and presented to the user of the vehicle based on the anticipated route. Images of a display screen of the mobile computing device may also be analyzed to identify the route information.

In this way, the head unit can obtain route information without the user specifically instructing the head unit to act (such as via a voice command). Moreover, this passive collection of data by the head unit can occur without formally coordinating the communication of route information from the mobile device to the head unit. Route information that is received or gathered without direct interaction with a user may be referred to as indirect route information. Indirect route information differs from conventional route information because there is no intent on the user's part to initiate some action via the head unit (or other computer) to present content related to navigation. As an example, a user may manifest this intent by announcing predetermined audio commands or directing other input actions, like touch-screen interactions, to the head unit.

The following description, along with the accompanying drawings, sets forth certain specific details in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that the disclosed embodiments may be practiced in various combinations, without one or more of these specific details, or with other methods, components, devices, materials, etc. In other instances, well-known structures or components that are associated with the environment of the present disclosure, including but not limited to the communication systems and networks and the automobile environment, have not been shown or described in order to avoid unnecessarily obscuring descriptions of the embodiments. Additionally, the various embodiments may be methods, systems, media, or devices.
Accordingly, the various embodiments may be entirely hardware embodiments, entirely software embodiments, or embodiments combining software and hardware aspects.

Throughout the specification, claims, and drawings, the following terms take the meaning explicitly associated herein, unless the context clearly dictates otherwise. The term “herein” refers to the specification, claims, and drawings associated with the current application. The phrases “in one embodiment,” “in another embodiment,” “in various embodiments,” “in some embodiments,” “in other embodiments,” and other variations thereof refer to one or more features, structures, functions, limitations, or characteristics of the present disclosure, and are not limited to the same or different embodiments unless the context clearly dictates otherwise. As used herein, the term “or” is an inclusive “or” operator and is equivalent to the phrases “A or B, or both” or “A or B or C, or any combination thereof,” and lists with additional elements are similarly treated. The term “based on” is not exclusive and allows for being based on additional features, functions, aspects, or limitations not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” includes singular and plural references.

The term “user” is defined as a person or occupant that is in or otherwise being transported by a vehicle. The user may be the driver or a passenger of the vehicle. The term “vehicle” is defined as a device used to transport people or goods (or both), and examples include automobiles, buses, aircraft, boats, or trains. A “processor” is defined as a component with at least some circuitry or other hardware that can execute instructions. The term “head unit” is defined as a component with at least some circuitry that is part of a vehicle and presents content to a user (as defined above). The term “present” is defined as to bring or introduce to the presence of a user through some sensory interaction. An “output interface” is defined as an interface including at least some circuitry that is configured to present content to a user. A “microphone” is defined as an instrument configured to convert sound waves into one or more corresponding electrical signals.

The term “content” is defined as information that can be presented to a user of the vehicle. Content may include visual content, audio content, tactile content, or some combination thereof. Visual content can include text, graphics, symbols, video, or other information that is displayed to a user. Audio content can include songs, vocals, music, chimes, or other types of sounds. Tactile content can include vibrations, pulses, or other types of touch-based sensations provided via a haptic interface. Generalized types of content can include advertisements, sports scores or information, logos, directions, restaurant menus, prices, hours of operation, coupons, descriptive information, emergency instructions, etc.

The term “route information” is defined as information related to a travel path (or route) or intended destination of a vehicle. The route information may be a single driving command or instruction, such as a left turn, or it may include a plurality of driving commands or instructions, such as turn right on Jones Avenue then turn left at Sunset Avenue and continue for 0.5 kilometers to arrive at your destination. The route information may include turning details, distances, a destination address, one or more mid-point addresses, street or road or highway identification information, or the like, or any combination thereof. Route information may be presented in audible or visual form. Audible route information may be any audio signal that can be captured by a microphone and used to identify an anticipated route of the vehicle. Visual route information may be any textual, graphical, or visible information on a display screen of a device that can be captured in an image from a camera and used to identify an anticipated route of the vehicle.
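For illustration only, route information as defined above might be reduced to a structured record along the following lines (a minimal Python sketch; the type and field names are hypothetical and not part of the disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RouteInstruction:
    """One driving command or instruction extracted from route information."""
    maneuver: str                       # e.g., "turn_left", "exit", "continue"
    road: Optional[str] = None          # named road, e.g., "Sunset Avenue"
    distance_m: Optional[float] = None  # distance qualifier, if given

@dataclass
class RouteInformation:
    """Route information: one or more instructions plus an optional destination."""
    instructions: List[RouteInstruction] = field(default_factory=list)
    destination: Optional[str] = None   # destination address, if known
    source: str = "audible"             # "audible" or "visual"

# "turn right on Jones Avenue then turn left at Sunset Avenue and continue
# for 0.5 kilometers to arrive at your destination"
info = RouteInformation(instructions=[
    RouteInstruction("turn_right", road="Jones Avenue"),
    RouteInstruction("turn_left", road="Sunset Avenue"),
    RouteInstruction("continue", distance_m=500.0),
])
```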
The accompanying context diagram illustrates a vehicle environment that utilizes audio or video input within the vehicle to identify audible or visual route information, which in turn is used to identify an anticipated route and provide content to the user in accordance with embodiments described herein. The system includes a vehicle that has a head unit and one or more accessories. The vehicle is virtually any means of transportation that includes a computing device and an output interface to provide content to a user of the vehicle. In the illustrative examples described herein, the computing device of the vehicle is the head unit, although other types of computing devices may be employed. Moreover, examples of vehicles include automobiles, aerial vehicles, water vessels, railroad vehicles, and other modes of transportation.

The head unit is a computing device that presents content and other information to users of the vehicle and provides interactive controls and other user interfaces to them. In various embodiments, the head unit utilizes one or more input/output interfaces for user interactions, which may be integrated into the head unit or external to it. In some embodiments, the input/output interfaces or a portion thereof may be part of or embedded within the head unit. In other embodiments, the other input/output interfaces or a portion thereof may be separate from or independent of the head unit. In various embodiments, the head unit may utilize some combination of integrated and external input/output interfaces. For example, the head unit may include a built-in display device to output visual content and utilize a separate speaker that is external to the head unit to output audio content. The integrated and external interfaces may be referred to collectively as the input/output interfaces.

The input/output interfaces may be configured to receive input from a user of the vehicle or to output content to a user of the vehicle (or both). The input/output interfaces may include one or more output interfaces, which may include a visual interface, such as a display device; an audio output interface, such as a speaker; a haptic interface, such as a tactile output device; or a combination thereof. Therefore, the input/output interfaces may include one or more output interfaces configured to output visual content, audio content, tactile content, or some combination thereof. The input/output interfaces may also include one or more input interfaces, which may include physical or soft buttons, a touchscreen, a microphone, or other input interfaces. Therefore, the input/output interfaces may include one or more input interfaces configured to receive visual, audio, or physical input commands, or some combination thereof.
Embodiments described herein may include the head unit receiving input or providing output through an internal or integrated input/output interface, other external input/output interfaces, or some combination thereof.

For ease of illustration, the vehicle is also illustrated as having one or more cameras and one or more microphones. As an example, the camera is configured to capture images of an interior of the vehicle. In one arrangement, the camera can be strategically positioned to focus on an area of the interior where a user may position or has previously positioned a mobile device. For example, a camera may be part of or attached to a rearview mirror (not shown) or a dashboard (not shown) of the vehicle. If a user holds a mobile device in a natural viewing position or places it in a cupholder or some other supporting space, the mobile device may be within the field of vision of the camera. As such, images from the camera may capture content being displayed by the mobile device, and they can be analyzed to determine if the mobile device is displaying visual route information, such as a route highlighted on a map or a route defined by graphical or textual information. (Additional information on this process will be presented below.) In one embodiment, the position or focus of the camera may be adjustable, whether manually or automatically (or both). One or more machine-learning (“ML”) models may assist in determining the accuracy of the positioning or focus setting of the camera in relation to detecting the content displayed by the mobile device, and adjustments can be automatically performed on either parameter (or both). As an alternative, a heuristic approach may be employed to adjust the camera.

The microphone is configured to capture audio signals within the vehicle. As an example, the microphone can be built into parts of the vehicle likely to be within audible range of audio generated from interactions between the mobile device and the user or conversations between the user and another person in or near the vehicle. As will be described in more detail below, the audio signals from the mobile device can be analyzed to determine if a user in the vehicle is giving or receiving audible route information or other content, either in relation to the user's interaction with the mobile device or with a person. In either the case of the camera or the microphone, a mobile device is not the only device capable of serving as a source of the collected data, as any machine with which the user may interact can do so.

Although the camera and microphone are illustrated as being separate from the head unit and the other input/output interfaces, embodiments are not so limited. Rather, in some embodiments, one or both of the camera or the microphone may be integrated into the input/output interfaces on the head unit or into the other input/output interfaces. Moreover, although the vehicle is shown as having both the camera and the microphone, embodiments are not so limited. In some embodiments, the vehicle may include the camera but not the microphone, although in other embodiments, the vehicle may include the microphone but not the camera.

As an illustrative example, assume there are a driver and one other passenger in the vehicle. As the driver is driving the vehicle down a road, the passenger may start to give the driver verbal instructions on which way to drive and where to turn. These verbal instructions may be referred to as audible route information.
In this instance, the driver has not input a destination into the head unit, so the head unit has not generated a travel route for the vehicle and thus cannot present content associated with the route to the driver or passenger. But it may be beneficial for the head unit to provide content related to the audible route information, such as a route to the destination or alternate routes, traffic information, driving reminders, advertisements for stores or restaurants along the route, etc.

Accordingly, the microphone captures the audio signals of the verbal instructions said by the passenger and provides them to the head unit. The head unit relies on various speech-recognition techniques to convert the captured audio signals into textual form. Natural-language-processing algorithms can then process the text to identify the intent and recognize the context of the audible route information. From this procedure, the head unit can then retrieve data related to the audible route information, such as an anticipated route or destination of the vehicle, and present this content to the driver.

In this example, the head unit may be programmed with the speech-recognition and natural-language-processing algorithms, although some other computer that is part of the vehicle may be responsible for such processing or share the task with the head unit (or some other computer). In addition, the speech recognition or natural-language processing (or both) may be performed by one or more computers, like the remote servers, that are part of a cloud-based center capable of exchanging communications with the vehicle.

In addition to the anticipated route, the head unit can present other content to the driver or passenger of the vehicle. Such content can be audible content, visual content, tactile feedback, or a combination thereof and can be presented via one or more input/output interfaces. For example, the head unit can provide a vibration or audible tone to remind the driver to make a turn as instructed by the passenger. As another example, the head unit can provide an advertisement for gas that is next to the exit where the passenger instructed the driver to leave the current freeway. In yet another example, the head unit can access a database or third-party server, such as the remote server, to determine if there is heavy traffic on the anticipated route and, if so, select an alternate route to provide to the driver, such as via a graphical map displayed on the head unit.

In some other embodiments, the received verbal instructions may be utilized by the head unit to alter a current route or anticipated route. If the user selects a route to a destination, or if the head unit identifies an anticipated route based on verbal instructions from a passenger, then the route can be altered by announcing additional verbal route information that the system recognizes. For example, assume the head unit is displaying a route via Interstate 5, and the driver tells a passenger, “I think I am going to take Highway 18 instead.” Alternatively, the passenger may say, “You should take Highway 18 because it is generally faster.” The system can process the new audible route information, and the head unit can present an updated route via Highway 18.
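As a rough sketch of this audio pathway, the following sketch converts captured audio to text and scans it for route intents (the `transcribe` stub stands in for whatever production speech-recognition engine is used, and the intent patterns are illustrative, not taken from the disclosure):

```python
import re

def transcribe(audio_frames: bytes) -> str:
    """Hypothetical ASR stage; substitute a real speech-to-text engine."""
    raise NotImplementedError("plug in a production speech-recognition engine")

# Illustrative patterns for spoken driving directions.
ROUTE_PATTERNS = [
    (re.compile(r"\bturn (left|right)(?: (?:at|on|onto) ([\w ]+))?", re.I), "turn"),
    (re.compile(r"\btake (?:the )?next exit\b", re.I), "exit"),
    (re.compile(r"\btake (highway|interstate|route) (\w+)", re.I), "take_road"),
]

def extract_route_intents(text: str):
    """Return a (kind, captured groups) pair for each route-like phrase found."""
    return [(kind, m.groups()) for pattern, kind in ROUTE_PATTERNS
            for m in pattern.finditer(text)]

# extract_route_intents("I think I am going to take Highway 18 instead")
# -> [("take_road", ("Highway", "18"))]
```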
Although the examples above describe the head unit as receiving verbal instructions from a passenger of the vehicle, the head unit can also receive other types of audible route information. For example, the driver may have input a destination into a map application executing on the mobile device, which produced a route from the vehicle's current location to the destination. The map application may output, via an electronic speaker on the mobile device or a speaker of the vehicle, audible instructions for the driver to follow the route produced by the map application. Again, the system can capture the audible signals produced by the electronic speaker and process them to enable the head unit to present appropriate content to the driver or passenger of the vehicle.

In addition to the audible information, the map application executing on the mobile device may display the route on a display screen of the mobile device. For example, the driver may have the mobile device affixed to or supported by a holder that positions the mobile device to be viewable by the driver. In this example, the camera may be positioned within the vehicle to capture images of the display screen of the mobile device.

The images can be analyzed to identify any route information being displayed. Such route information may include a highlighted route on a map, textual instructions (e.g., “turn left at Sunset Avenue”), text displayed on the map, graphical instructions (e.g., a left turn arrow), graphics (including icons or other symbols) displayed on the map, mileage information, or other information that can identify where the vehicle is traveling and the proposed route of travel, or some combination thereof. The system can rely on image-recognition algorithms to make predictions about the analyzed data, and, similar to using the audible instructions, the head unit can retrieve and present content corresponding to the identified visual objects, such as an anticipated route or destination of the vehicle. In addition to the anticipated route, the head unit can present other content to the driver or passenger of the vehicle.

The head unit, another computer that is part of the vehicle, or the remote servers (or any combination of these computers) may process the images from the camera. Moreover, data from other systems may supplement the predictions generated from the analysis of the images. For example, the audible route information, whether received from the mobile device or an occupant of the vehicle, can be used to adjust the confidence factors of the identified objects (including text) or class labels, leading to certain candidates being dropped from consideration because they conflict with the supplemental audio data.

Similarly, current and historical positional coordinates from the GPS system of the vehicle can be used to filter out certain items that may have been identified as possible candidates in the analyzed images. For example, if the system is aware of the current location of the vehicle and outputs a class label or textual data that is inconsistent with the geography of that location, the system can ignore this output. As a more specific example, if the vehicle is traveling in Texas and one of the outputs of the system predicts that text recognized from the map application of the mobile device reads as “Lake Michigan,” the system can lower the corresponding confidence factor and ignore the prediction.
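The location-based filtering just described might be sketched as follows (illustrative only; `known_places_near` stands in for whatever geographic index or map database the system actually consults):

```python
from typing import Dict, List, Tuple

def known_places_near(lat: float, lon: float, radius_km: float) -> List[str]:
    """Hypothetical lookup: place names that are plausible near a coordinate."""
    raise NotImplementedError("back this with a real map or gazetteer database")

def filter_recognized_text(candidates: Dict[str, float],
                           location: Tuple[float, float],
                           penalty: float = 0.5,
                           threshold: float = 0.4) -> Dict[str, float]:
    """Lower the confidence factor of recognized text that is inconsistent with
    the vehicle's current location; drop candidates that fall below threshold."""
    plausible = set(known_places_near(*location, radius_km=50.0))
    kept = {}
    for text, confidence in candidates.items():
        if text not in plausible:
            # e.g., "Lake Michigan" recognized while the vehicle is in Texas
            confidence *= penalty
        if confidence >= threshold:
            kept[text] = confidence
    return kept
```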
In various embodiments, the head unit may be configured to communicate with other computing devices, such as the mobile device or the remote server. For example, the head unit may communicate with the remote server via a communication network to provide audio or image data to the remote server for processing, as described herein, or to request content or other information, such as in response to the identification of a future route of the vehicle. Accordingly, in some embodiments, the remote server may provide some of the functionality described herein. In various embodiments, the content may be provided to the mobile device for presentation to a user of the mobile device. In at least one embodiment, the mobile device may act as an intermediate device between the head unit and the remote server, although the head unit (or other computer) may be equipped with a long-range communications stack to exchange data with the remote server.

The remote server is any computing device, such as a server, cloud resources, a smartphone or other mobile device, or other computing device, that is remote to the vehicle and that can provide content or other information to the head unit or the mobile device. Although the remote server is illustrated as a single device, embodiments are not so limited. Rather, the remote server may be one or more computing devices that perform the described functions.

The mobile device includes any personal device capable of communicating with a head unit (or other computer) of the vehicle or the remote server. The mobile device is configured and structured to send and receive information, content, or controls to and from the head unit (or another computer) or the remote server. Examples of the mobile device include laptop computers, smartphones, tablet computers, wearable computing devices, other smart devices, or other handheld computing devices.

In some embodiments, the remote server, the head unit, and the mobile device communicate with each other via a communication network. The communication network is configured to couple various computing devices to transmit data from one or more devices to one or more other devices. The communication network may include various wireless networks using various forms of communication technologies and topologies, such as cellular networks, mesh networks, or the like.

In various embodiments, the head unit communicates with the mobile device via a mobile device communication network. The mobile device communication network is configured to couple the mobile device with the head unit (or another computer) to transmit content/data between the mobile device and the head unit (or other computer). The information communicated between devices may include current accessory status or data, vehicle status information, requests to access accessory data, requests to control or modify an accessory, video data, voice data, image data, text data, or other types of content, data, or information. The mobile device communication network may include a variety of short-range wireless communication networks, such as personal area networks utilizing classic Bluetooth or Bluetooth Low Energy protocols, Wi-Fi, USB, or an IR optical network, to enable communication between the mobile device and the head unit. The communication network may also be implemented using Internet connectivity over wide-area cellular networks (such as 4G and 5G networks).

In various embodiments, the user may interact with the head unit via the mobile device such that the mobile device acts as a virtual head unit.
In this way, user input provided to the head unit may be received from the user via the mobile device and transmitted from the mobile device to the head unit for processing by the head unit. Conversely, content to be presented to the user may be provided to the mobile device from the head unit and displayed or otherwise output to the user from the mobile device. In some other embodiments, the mobile device may perform the functionality of the head unit, or the mobile device may project applications or other processes to the head unit.

The mobile device communication network, the communication network, and the accessory communication network may be separate communication networks, as illustrated, or some of them may be the same communication network or share network components.

Although the mobile device is described in some embodiments as being configured to communicate with the head unit, in some cases, the mobile device may not provide a destination or route information to the head unit via the mobile device communication network or via the communication network. In these scenarios, as described herein, the head unit obtains the route or destination information (i.e., driving direction commands) via audio commands captured from a passenger within the vehicle or via audio or visual data generated by the mobile device.

The head unit may also be configured to access or receive information from, or control the use of, the one or more accessories. The accessories can include virtually any vehicle utility or device that provides information or data to the user, including data received from core components of the vehicle via the vehicle's Controller Area Network (CAN bus). Examples of these accessories include a gas tank level gauge, speedometer, odometer, wiper activity, external temperature, oil pressure gauge, temperature gauge, tire pressure gauge, or other vehicle sensors that provide information to a user of the vehicle. Accessories may also include applications executing on the head unit that provide information to the user or have two-way interactions with the user. Examples of these accessories include navigation, audio and radio controls, television or music applications, environmental control applications, vehicle performance or maintenance applications, or other applications.

Accessories may also include information from other sources. For example, in some embodiments, the accessories may include “derived accessory data” from internal-facing cameras, external-facing cameras, or other input devices. Derived accessory data is information about an environment associated with the vehicle that can provide additional details or aspects. For example, images from a camera on the vehicle may be analyzed to determine which user is in the vehicle, which user is operating it, where the driver or other user is looking (e.g., whether they are talking to a passenger), whether there are pedestrians nearby, whether there are billboards or store signs next to the road or vehicle, etc.

In some embodiments, the accessories may also include any vehicle utility or device that is controllable by a user. Examples of these accessories include adjustable seats, a sunroof, side mirrors, the rear-view mirror, the air conditioner, power windows, or other controllable features of the vehicle.

It should be noted that some accessories may only output data, some accessories may only receive signals to manipulate the accessory, and some accessories may both input and output data.
For example, a speedometer may only output the current speed of the vehicle; a power window may only receive signals to move the window up or down, but not return any information to the head unit; and the navigation system may receive signals for a destination and return a suggested travel route to the destination. It should be further noted that these examples are non-exhaustive, and other types of accessories may also be employed.

The head unit can communicate with the accessories via an accessory communication network. The accessory communication network is configured to couple the accessories with the head unit to transmit content/data between the accessories and the head unit. The information communicated between devices may include current accessory status or data, accessory control data, video data, voice data, image data, text data, or other types of content, data, or information. The accessory communication network may include or support one or more physical networks; one or more wireless communication networks; one or more application program interfaces; or one or more other networks capable of transmitting data from one accessory to another, from an accessory to the head unit, or from the head unit to an accessory; or some combination thereof, depending on the types of accessories communicating with the head unit. For example, the accessory communication network may include an automotive body communication network, such as a wired controller area network; a short-range wireless communication network, such as a personal area network utilizing Bluetooth Low Energy protocols; or any other type of network.

In some other embodiments, the head unit may act as an intermediate device that facilitates communication between the mobile device and the accessories. In this way, the head unit can act as a gateway between the mobile device and the accessories to provide authentication and authorization for permitting or restricting the control of the accessories and the transfer of accessory information, which can enable a user to access information from, or control, the accessories via the mobile device.
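As one hedged illustration of reading accessory data over the controller area network mentioned above, a head-unit process might use the python-can library on a Linux SocketCAN interface; the frame ID and scaling below are invented for the example, since real values are vehicle-specific:

```python
import can  # python-can; assumes a SocketCAN interface on Linux

SPEED_FRAME_ID = 0x244  # invented ID for this sketch; real IDs vary by vehicle

def read_vehicle_speed(channel: str = "can0", timeout: float = 1.0):
    """Poll the CAN bus once and decode a (hypothetical) speed frame in km/h."""
    with can.interface.Bus(channel=channel, bustype="socketcan") as bus:
        msg = bus.recv(timeout)
        if msg is not None and msg.arbitration_id == SPEED_FRAME_ID:
            # Invented scaling: first two bytes, 0.01 km/h per bit.
            return int.from_bytes(msg.data[0:2], "big") * 0.01
    return None
```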
Use case examples of various views of an interior of a vehicle and a mobile computing device are shown in the accompanying figures in accordance with embodiments described herein. In particular, one figure shows a use case example A of a view of an interior of a vehicle. Similar to what is described above, the vehicle includes a head unit, a camera, and a microphone.

The microphone is positioned and configured to capture audio signals from within an interior of the vehicle. As discussed elsewhere herein, the microphone is configured to capture audio signals of a passenger talking, audio output from the mobile device, or other audibly detectable information as the vehicle is being operated. The audio signals are analyzed to identify audible route information, such as directions to make a turn; look for a particular street, store, or location; continue for a particular amount of time or a particular distance; or other route information. This route information can then be used to select and present content to a user in the vehicle, as described herein.

In this illustration, the microphone is positioned on the front windshield above the rearview mirror inside the vehicle. In other embodiments, the microphone may also be positioned elsewhere within the vehicle. For example, in some embodiments, the microphone may be embedded in the head unit. In other embodiments, the microphone may be positioned (not illustrated) in the steering wheel of the vehicle, at some other location on the dashboard of the vehicle, in a backseat area of the vehicle, etc. These example locations of where the microphone may be positioned are for illustrative purposes and are not to be considered limiting. Rather, other locations within the vehicle may also be utilized to house the microphone.

Although the microphone is illustrated as a single device, embodiments are not so limited. Rather, in some embodiments, the vehicle may include a plurality of microphones that are positioned at different locations throughout the interior of the vehicle. In this way, each microphone may be configured to capture audio signals from a different area or passenger within the vehicle, or they may be configured to jointly collect the audio signals independent of where they originate.

As an example, the camera is positioned inside the vehicle to capture images of a display screen of the mobile device. In various situations, the mobile device may be positioned in a variety of different locations throughout the interior of the vehicle. However, if a user of the mobile device is using the mobile device to provide maps or route information to the user, then the mobile device is probably positioned within the vehicle so that the user can see a display screen of the mobile device. Accordingly, the camera is positioned within the vehicle to capture images of a display screen of the mobile device.

In example A, the mobile device is illustrated as being positioned on the dashboard of the vehicle such that the driver (not illustrated) can see the display screen of the mobile device. Thus, the camera is positioned above the head unit such that it can capture images of the mobile device. In another embodiment, the camera may be positioned (not illustrated) in the roof of the vehicle to provide a wider viewing angle of the interior of the vehicle. In yet another embodiment, the camera may be positioned (not illustrated) near a back window of the vehicle to capture images of the mobile device when the mobile device is being held by a passenger in a back seat of the vehicle. These locations of the camera and mobile device are for illustrative purposes and are not to be considered limiting; rather, other locations within the vehicle may also be utilized to house the camera. Although not illustrated, in some embodiments, the vehicle may include multiple cameras to capture images of the mobile device from different angles or of different areas within the vehicle.

In this example, as the vehicle is being operated, the camera is capturing images of the mobile device. The images from the camera are analyzed to determine if visual route information is presently being displayed on the mobile device. This visual route information may include a map, textual information, graphical information, or other identifiers of where the vehicle is expected to travel. This route information can then be used to select and present content to a user in the vehicle, as described herein.

An example image of a mobile device captured from the camera is shown in another figure, which presents a use case example of a view of a mobile computing device in accordance with embodiments described herein. In particular, it illustrates an example image B of a display screen of a mobile device, such as one captured by the camera. In this example, the display screen is displaying a map. The map includes a position icon and a route.
The position icon illustrates the current location of the mobile device and thus the current location of the vehicle in which the mobile device is located. The route represents the projected or estimated path of the vehicle.

In various situations, a user of the mobile device may preselect a destination for the vehicle, such as an address, a store, a restaurant, a park, or some other location. The mobile device (or a remote server with which the mobile device is communicating) selects a route between the current location of the mobile device and the selected destination. The mobile device then augments the map to include a graphical representation of the route, such as by highlighting or changing the roads or streets on which the vehicle should travel to reach the selected destination.

As described herein, the head unit of the vehicle can utilize this image of the display screen to identify the route without directly receiving from the mobile device any data related to the route. In various embodiments, as explained above, image-recognition techniques can identify the map and related items being displayed on the display screen. Such image-recognition techniques may identify street patterns, street (or city or building) names, points of interest (“POI”), or other displayed information.

In some other embodiments, other information that is being displayed by the display screen may be utilized to identify the route. For example, a turn arrow (such as the illustrated left turn arrow) along with a street name (such as “Chestnut 295”) may indicate an upcoming turn and the street on which the turn is to take place. In various embodiments, the system can compare the current location of the vehicle with the recognized street name and the turn arrow to determine the estimated route and where the vehicle is expected to travel. The other information may also include a distance measurement (e.g., 25 m), which can be further utilized by itself or in combination with the other displayed information to identify where the turn is located with respect to the current location of the vehicle. In various embodiments, one or multiple different pieces of information displayed on the display screen can be utilized to identify the route. Once the route is identified, the head unit can present content related to the route to the user (e.g., the driver or a passenger in the vehicle), as described herein.
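A sketch of how these displayed cues could be fused into an upcoming maneuver (purely illustrative; the recognizer outputs and field names are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayedCues:
    """Cues recognized on the mobile device's screen (assumed recognizer output)."""
    turn_arrow: Optional[str] = None    # e.g., "left"
    street_text: Optional[str] = None   # e.g., "Chestnut"
    distance_m: Optional[float] = None  # e.g., 25.0

def maneuver_from_cues(cues: DisplayedCues, current_road: str) -> Optional[dict]:
    """Combine screen cues with the vehicle's current road into an upcoming maneuver."""
    if cues.turn_arrow and cues.street_text:
        return {
            "from_road": current_road,
            "action": f"turn_{cues.turn_arrow}",
            "onto_road": cues.street_text,
            "in_meters": cues.distance_m,  # locates the turn, if a distance is shown
        }
    return None

# maneuver_from_cues(DisplayedCues("left", "Chestnut", 25.0), "Main Street")
# -> {'from_road': 'Main Street', 'action': 'turn_left',
#     'onto_road': 'Chestnut', 'in_meters': 25.0}
```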
The operation of certain aspects of the disclosure will now be described with respect to the flow diagrams. In at least one of various embodiments, the processes described in conjunction with those diagrams may be implemented by or executed on one or more computing devices, such as the head unit. In some embodiments, at least some (including all) of the functionality described herein may be implemented by or executed on the mobile device, another computer that is part of the vehicle, or the remote server.

The processes are not necessarily limited to the chronological orders shown here and may be executed without adhering to all the steps described here or with steps that are beyond those in the diagrams. To be clear, these processes are merely examples of flows that may be applicable to or adopt the principles described herein.

The first flow diagram generally shows one embodiment of a process for monitoring audible route information to select and provide content to a user in accordance with embodiments described herein. The process begins, after a start block, at a block where an interior of a vehicle is monitored for audio signals. In various embodiments, one or more microphones are employed to capture audio signals from within an interior of or otherwise near the vehicle. In some embodiments, the audio signals may originate from an electronic speaker on a mobile device, such as when the mobile device is executing a map application that is outputting audible instructions or when the mobile device is in speakerphone mode with a person on the other end of the phone call providing directions or driving instructions. The audio signals may also be broadcast from a speaker that is part of the vehicle, such as if the mobile device is wirelessly coupled to the speaker. In other embodiments, the audio signals may originate from passengers within the vehicle, such as when a passenger is giving verbal driving directions or instructions to the driver.

In yet other embodiments, the audio signals may be collected prior to being output via an electronic speaker associated with the head unit. For example, assume a user is using the head unit to provide hands-free speakerphone capability. In this scenario, the mobile device of the user is utilizing a wireless communication protocol (e.g., Bluetooth) to send audio signal information of a phone call to the head unit. The head unit can then output the phone call via the vehicle's speaker system. In this embodiment, the information received from the mobile device via the wireless communication protocol is analyzed prior to being output via the speaker system of the vehicle.

The process proceeds to a block where received audio signals are analyzed for route information. In various embodiments, one or more speech-recognition techniques are applied to the audio signals to convert them into textual form. In some embodiments, one or more audio filters may be employed to separate multiple voices being captured in the audio signals. Non-limiting examples of such filters include pitch filters (e.g., to distinguish between male and female voices), accent detectors (e.g., to distinguish between different dialects or pronunciations), etc. As another example, the system can be configured to recognize the speech of one or more users to help distinguish between speakers based on differences in the acoustic features of their voices.

The process continues at a decision block, where a determination is made whether the audio signals include route information. In some embodiments, natural-language-processing models process the converted text to identify keywords or phrases that are associated with different types of route information. Examples of such keywords or phrases may include “turn,” “continue for,” “next to,” “exit,” “your destination is ahead,” “follow the signs,” etc. One or more different keywords or phrases may be associated with one or more separate pieces of route information. In various embodiments, these keywords or phrases may be combined with names, times, distances, colors, addresses, or other descriptive information that, when combined, represents the route information. If the audio signals include route information, the process flows onward; otherwise, the process loops back to continue to monitor the interior of the vehicle for audio signals.
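A minimal sketch of that decision block (the keyword list mirrors the examples above; a deployed system would use richer natural-language-processing models):

```python
import re

# Known route-information keywords and phrases (illustrative subset).
KNOWN_COMMANDS = [
    r"\bturn\b",
    r"\bcontinue for\b",
    r"\bnext to\b",
    r"\bexit\b",
    r"\byour destination is ahead\b",
    r"\bfollow the signs\b",
]
COMMAND_RE = re.compile("|".join(KNOWN_COMMANDS), re.IGNORECASE)

def contains_route_information(transcribed_text: str) -> bool:
    """Decision block: does the transcribed audio match any known command?"""
    return COMMAND_RE.search(transcribed_text) is not None

assert contains_route_information("Turn left at Sunset Avenue")
assert not contains_route_information("I love this song")
```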
At the next block, a current location of the vehicle is determined. In various embodiments, a GPS unit or other location-based tracking system may be employed to determine the current location of the vehicle.

The process proceeds next to a block where an anticipated route of the vehicle is determined based on the route information. The anticipated route may be one or more streets, one or more turns, one or more GPS coordinates, an address, a physical location, a course or way of travel between two points, or any other information that defines where the vehicle may be traveling.

In various embodiments, one or more pieces of route information, along with the current location of the vehicle, are utilized to identify the anticipated route, such as via a mapping database query. For example, if the only route information is “turn left at Sunset Avenue,” then the anticipated route may be calculated from a current location of the vehicle along the current road on which the vehicle is traveling (which may be determined based on continuous GPS monitoring compared with one or more mapping features) to an intersection with Sunset Avenue. Conversely, if the route information is “turn right on Jones Street, go about one kilometer, then turn left at the top of the hill,” then each separate piece of route information is combined to determine a longer, more detailed anticipated route.

In other embodiments, the anticipated route may also include an additional anticipated route beyond the current route information. This additional anticipated route may extend for a particular distance, or to a particular traffic feature or condition, beyond the route information. For example, if the route information is “turn left at Sunset Avenue,” then the anticipated route may include one additional kilometer, or the stretch to the next intersection, after the turn on Sunset Avenue.

In some embodiments, the anticipated route may also be selected from a database that stores previous destinations or routes traveled by the vehicle, which is described in more detail below. For example, if the vehicle has stored multiple previous routes and each time the vehicle turns right on Jones Street it then turns left on Sunset Avenue, then the anticipated route can include the turn on Sunset Avenue, even if that information was not part of the route information.

In some other embodiments, the current location of the vehicle may not be used, and thus the location-determination block may be optional and may not be performed. For example, if the route information is “it's in the same parking lot as Restaurant_XYZ,” then a database can be queried for the address of Restaurant_XYZ, which can be used as the anticipated route. If, however, the current location of the vehicle is obtained, then the anticipated route can include all streets and turns between the current location and Restaurant_XYZ.
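The chaining of several pieces of route information into one anticipated route might be stubbed out as follows (the mapping-database calls are assumptions; the disclosure does not specify an API):

```python
from typing import List, Tuple

def query_segment(position: Tuple[float, float], instruction: dict) -> List[str]:
    """Hypothetical mapping-database query: the streets between `position`
    and the point where `instruction` (e.g., a turn) is carried out."""
    raise NotImplementedError("back this with a real mapping database")

def end_of(segment: List[str]) -> Tuple[float, float]:
    """Hypothetical helper: the coordinates at the end of a segment."""
    raise NotImplementedError

def anticipated_route(current_location: Tuple[float, float],
                      instructions: List[dict]) -> List[str]:
    """Chain each piece of route information ("turn right on Jones Street",
    "go about one kilometer", ...) into one longer, more detailed route."""
    route: List[str] = []
    position = current_location
    for instruction in instructions:
        segment = query_segment(position, instruction)
        route.extend(segment)
        position = end_of(segment)  # continue from where this maneuver ends
    return route
```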
The process continues next at a block where content is selected based on the anticipated route. In various embodiments, a database stores content related to various locations, such as thoroughfares or other travel paths. The anticipated route is then compared to or queried against the database, and content associated with the anticipated route is retrieved. The content that is retrieved may be based on any number of predetermined factors, such as historical activity related to the user or vehicle, associations with establishments maintained by the user, or data received from the accessories. For example, if the address of a store or restaurant is on the anticipated route or within a selected distance therefrom and the user has visited one of them recently, then an advertisement for that store or restaurant can be retrieved.

In another example, a traffic service may provide in real time the locations of traffic accidents, heavy traffic, construction, poorly maintained roads, or other traffic conditions. If the anticipated route is through an area associated with poor traffic conditions (e.g., an accident), then the selected content may be data retrieved from this service, and an alternative route may be generated.

The process proceeds to a block where the content is presented to a user (e.g., the driver or another passenger) of the vehicle. In various embodiments, the content is provided to the user via a display device, an audio output device, or a haptic interface. As described herein, the content may be visual content, audio content, tactile content, or some combination thereof. For example, if the selected content is an advertisement, then the head unit may output an audible description of the restaurant, the hours of operation, or a current discount or sale. As another example, if the selected content includes an alternative route, then the head unit may display a navigation interface with a graphical image of the alternative route from the vehicle's current location.

After this block, the process loops back to continue to monitor the interior of the vehicle for additional audio signals to identify additional route information. In various embodiments, the head unit may continuously monitor the vehicle for audio signals. In other embodiments, the head unit may monitor the vehicle for audio signals at selected intervals or for a selected period of time, or this feature may be voice activated. As an example, if the audio signals are analyzed and the user is listening to music, then the head unit may monitor the vehicle for audio signals at a slower frequency (e.g., once every five minutes, rather than once every five seconds). But if the audio signals include route information, then the monitoring rate may be increased or made continuous to capture successive pieces of route information. Speaker-recognition models can also be used to distinguish a user's voice from spoken words or singing originating from the vehicle's entertainment system.

In some other embodiments, the head unit may utilize previous route information to determine when to next monitor the vehicle for another piece of route information. For example, if the previous route information indicates that the user is to drive straight for 30 minutes, then the head unit may wait for 25 minutes before it next monitors the vehicle for audio signals that may contain route information. These examples are for illustrative purposes, and other embodiments may employ other values.
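In sketch form, the interval adaptation described above might look like this (the timings are the example values from the text, not mandated ones):

```python
from typing import Optional

def next_monitor_delay_s(route_info_active: bool,
                         music_playing: bool,
                         minutes_until_next_maneuver: Optional[float] = None) -> float:
    """Pick how long to wait before sampling the cabin audio again."""
    if minutes_until_next_maneuver is not None:
        # e.g., "drive straight for 30 minutes" -> resume listening after ~25 minutes
        return max(0.0, (minutes_until_next_maneuver - 5.0) * 60.0)
    if route_info_active:
        return 5.0    # route information recently heard: sample every few seconds
    if music_playing:
        return 300.0  # only music detected: back off to once every five minutes
    return 60.0       # default periodic check (illustrative value)
```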
Another flow diagram illustrates one embodiment of a process for monitoring images of a display screen of a mobile computing device to identify visual route information to select and present content to a user in accordance with embodiments described herein. The process begins, after a start block, at a block where one or more images of a display screen of a user's mobile device are captured. In some embodiments, image-recognition techniques may be performed to identify or determine if a mobile device display screen is visible in the captured images.

The process proceeds to a block where the captured images are analyzed for route information. In various embodiments, one or more image-recognition techniques are applied to the images to identify maps, highlighted routes, words or phrases, graphics, icons, or other directions or driving instructions that are being displayed on the display screen of the mobile device.

The process continues at a decision block, where a determination is made whether the images include route information. In some embodiments, any identified words or phrases detected in the images are processed to identify keywords or phrases that are associated with different types of route information, similar to what is described above with respect to the audible route information. Similarly, various different graphics, icons, mapping information, etc., from the images may be processed to determine if the image includes visual route information. In the case of text, the symbols extracted from the images can be mapped against a database of known characters or abstract representations of the characters. In addition, adaptive-recognition techniques, which rely on features or characters identified with a high degree of confidence to recognize adjacent, unidentified features or characters, may be employed.

If the images include route information, the process flows onward; otherwise, the process loops back to continue to capture additional images of the display screen of the user's mobile device.

At the next block, a current location of the vehicle is determined. In various embodiments, this block may employ embodiments of the corresponding block of the audio-monitoring process to determine the current location of the vehicle.

The process proceeds next to a block where an anticipated route of the vehicle is determined based on the route information. In various embodiments, this block may implement embodiments of the corresponding block of the audio-monitoring process to determine the anticipated route. For example, one or more mapping databases may be accessed and the route information queried to predict where the vehicle is headed. Previous travel history, including for the current trip or other excursions from the past, may also be considered.

The process continues next at a block where content is selected based on the anticipated route, which may likewise implement embodiments of the corresponding block of the audio-monitoring process. The process then proceeds to a block where the content is provided to a user of the vehicle, which may implement embodiments of the corresponding presentation block to provide the selected content.

After this block, the process loops back to continue to capture images of the display screen of the user's mobile device. Similar to what is described above, this process may capture images at selected time periods or intervals, or based on the route information itself.

A further flow diagram illustrates one embodiment of a process for selecting a previous route, based on a current route and an anticipated route, to select and provide content to a user in accordance with embodiments described herein. This process begins, after a start block, at a block where a plurality of previous routes are stored. In various embodiments, the routes may be stored as a plurality of physical location data points, as a plurality of turns, a series of street names, etc.
The next figure illustrates a logical flow diagram generally showing one embodiment of a process for selecting a previous route based on a current route and an anticipated route to select and provide content to a user in accordance with embodiments described herein.

Process begins, after a start block, at block , where a plurality of previous routes is stored. In various embodiments, the routes may be stored as a plurality of physical location data points, as a plurality of turns, as a series of street names, etc. The previous routes may have been selected or programmed by a user, or they may be determined based on historical travel patterns of the user or vehicle.

Process proceeds to block , where a plurality of current location data for the vehicle is received. In some embodiments, the current location data is obtained by monitoring a GPS device or other location-based device, possibly for a predetermined amount of time, such as from when the vehicle was turned on. As such, the current location data may be a set of location data collected over the vehicle's present route, or over multiple routes over the course of some period.

Process continues to block , where a current route of the vehicle is generated based on the current location data. In various embodiments, the location data points are compared to one another and to a map to generate the current route. As an example, the current location data identifies individual locations where the vehicle has previously been during the current operation of the vehicle, and the current route indicates the path the vehicle has traveled during that operation.

Process proceeds next to block , where an anticipated route of the vehicle is determined. In various embodiments, the anticipated route is determined as described above for the audio-based or image-based processes.

Process continues next at block , where a previous route is selected from the stored plurality of previous routes based on that previous route having a first segment that matches the current route of the vehicle and a second segment that matches the anticipated route of the vehicle.

Process then proceeds to block , where content is selected based on the selected previous route. In various embodiments, this block may implement embodiments of the content-selection block described above, with content selected based on the previous route rather than the anticipated route. As with earlier examples, process may result in content being presented to the user without the user's direct interaction with the system. A simplified sketch of the segment matching follows.
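The segment-matching step can be illustrated with a short Python sketch in which routes are simplified to ordered lists of street names; the helper names and the contiguous-subsequence test are assumptions made for illustration, not the disclosed matching logic.

    # Illustrative sketch of the previous-route selection described above;
    # routes are simplified to ordered lists of street names.

    def _is_subsequence(segment, route):
        """True if `segment` appears as a contiguous run inside `route`."""
        n = len(segment)
        return n > 0 and any(route[i:i + n] == segment
                             for i in range(len(route) - n + 1))

    def select_previous_route(previous_routes, current_route, anticipated_route):
        """Return the first stored route containing both the path already
        traveled (current_route) and the predicted path (anticipated_route)."""
        for route in previous_routes:
            if (_is_subsequence(current_route, route)
                    and _is_subsequence(anticipated_route, route)):
                return route
        return None

    # Example: the first stored route is selected because it contains both
    # the streets already driven and the streets the vehicle is expected
    # to take next.
    stored = [["Oak St", "Main St", "5th Ave", "Pine Rd"],
              ["Oak St", "Elm St", "Airport Way"]]
    print(select_previous_route(stored,
                                ["Oak St", "Main St"],
                                ["5th Ave", "Pine Rd"]))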
The next figure shows a system diagram that describes one implementation of computing systems for implementing embodiments described herein. In this example, the system includes a head unit and one or more other computing devices.

As described herein, the head unit is a computing device that can perform the functionality described herein for monitoring audio signals or images of a user's mobile device for route information to determine an anticipated route, which is used to select and provide associated content to a user. One or more special-purpose computing systems may be used to implement the head unit. Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or some combination thereof. The head unit includes memory, one or more processors, a display, input/output (I/O) interfaces, other computer-readable media, a network interface, and other components. The head unit may also be in communication with a camera, a microphone, or both. The camera, the microphone, or both may be separate from or external to the head unit, as illustrated. In some embodiments, the camera, the microphone, or some combination thereof may be embedded in or otherwise incorporated into the head unit, such as among the other components.

The processor includes one or more processors that execute computer instructions to perform actions, including at least some embodiments described herein. In various embodiments, the processor may include one or more central processing units (CPUs), programmable logic, or other processing circuitry.

The memory may include one or more of various types of non-volatile and/or volatile storage technologies. Examples include flash memory, hard disk drives, optical drives, solid-state drives, various types of random-access memory (RAM), various types of read-only memory (ROM), other computer-readable storage media (also referred to as processor-readable storage media), other memory technologies, or any combination thereof. The memory may be utilized to store information, including computer-readable instructions that are utilized by the processor to perform actions, including at least some embodiments described herein.

The memory may have stored thereon various modules, such as an automobile (or vehicle) monitoring module and a content presentation module. The automobile monitoring module provides functionality to capture and analyze audio signals or images from the microphone or camera, respectively, for route information, as described herein. The content presentation module provides functionality to determine an anticipated route from the detected route information. In some embodiments, the content presentation module requests associated content from another computing device, such as the other computing devices, which may include a remote server. In other embodiments, the content presentation module itself selects the associated content. Once the content is selected or received, the content presentation module provides it to the user, such as via the display, other I/O interfaces, or other components.

The memory may also store other programs and other content. Other programs may include operating systems, user applications, or other computer programs. The content may include visual, audio, or tactile content to provide to the user, as described herein.

The display is a display device capable of rendering content to a user. In various embodiments, the content selected by the content presentation module is presented to the user via the display. The display may be a liquid crystal display, a light-emitting-diode display, or another type of display device, and may include a touch-sensitive screen capable of receiving inputs from a user's hand, stylus, or other object.

The I/O interfaces may include interfaces for various other input or output devices, such as audio interfaces, other video interfaces, tactile interface devices, USB interfaces, physical buttons, keyboards, or the like. In some embodiments, the I/O interfaces provide functionality for the head unit to communicate with the camera or the microphone. In other embodiments, the I/O interfaces provide functionality for the head unit to output content via display devices, audio output devices, or haptic interface devices that are separate from the head unit, for providing visual, audible, or tactile content, respectively, to the user of the vehicle.

As an example, the camera is positioned and configured to capture images of a display screen of a user's mobile device (not illustrated), and the microphone is positioned and configured to capture audio from within an interior of the vehicle (not illustrated). A minimal sketch of how the automobile monitoring module and the content presentation module might interact follows.
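The interaction between the two modules can be illustrated with a minimal Python sketch; the class and method names are hypothetical stand-ins for the automobile monitoring module and the content presentation module, and the trivial keyword test and local content store are illustrative assumptions.

    # Hypothetical stand-ins for the two memory-resident modules described
    # above; names, the keyword test, and the local content store are
    # assumptions for illustration only.

    class AutomobileMonitoringModule:
        """Stands in for the module that scans audio/images for route info."""
        def detect_route_info(self, audio_text):
            # Trivial placeholder analysis: pass through anything that
            # looks like a direction (real embodiments use the keyword
            # and recognition techniques described above).
            return audio_text if "turn" in audio_text.lower() else None

    class ContentPresentationModule:
        """Stands in for the module that selects and renders content."""
        def __init__(self, content_by_street):
            self.content_by_street = content_by_street  # local store

        def present(self, route_info):
            # Select content associated with a street on the route and
            # render it (printing stands in for the display output).
            for street, content in self.content_by_street.items():
                if street.lower() in route_info.lower():
                    print(f"Displaying: {content}")
                    return content
            return None

    # Wiring the modules together, as the head unit's processor would:
    monitor = AutomobileMonitoringModule()
    presenter = ContentPresentationModule({"Main St": "Ad: cafe at 5th & Main"})
    info = monitor.detect_route_info("turn left on Main St")
    if info:
        presenter.present(info)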
Returning to the head unit's remaining components, the other computer-readable media may include other types of stationary or removable computer-readable media, such as removable flash drives, external hard drives, or the like.

The network interfaces are configured to communicate with other computing devices, such as the other computing devices described below, via a communication network, and include transmitters and receivers (not illustrated) to send and receive data as described herein. The communication network may include the communication network or the mobile device communication network described elsewhere herein.

The other computing devices are computing devices that are remote from the head unit or part of the vehicle and, in some embodiments, can perform the functionality described herein for processing incoming data (such as audio or visual data) and delivering corresponding content to present to a user of the head unit. As explained earlier, the presented content may be based on an anticipated route. The other computing devices may include a remote server, a mobile device, or some other computer that is part of the vehicle.

One or more special-purpose computing systems may be used to implement the other computing devices. Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or some combination thereof.

The other computing devices include memory, one or more processors, a display, I/O interfaces, and a network interface, which may be similar to or incorporate embodiments of the memory, processor, display, I/O interfaces, and network interface of the head unit, respectively. Thus, the processor includes one or more processors that execute computer instructions to perform actions, including at least some embodiments described herein, and may include one or more central processing units (CPUs), programmable logic, or other processing circuitry. The memory may include one or more of various types of non-volatile and/or volatile storage technologies and may be utilized to store information, including computer-readable instructions that are utilized by the processor to perform actions described herein. The memory may also store programs and content. The programs may include a content selection module (not illustrated), similar to the content presentation module, that selects and provides content to the head unit based on information received from the head unit.

The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.