US20150172607A1 - Providing vicarious tourism sessions - Google Patents

Providing vicarious tourism sessions

Info

Publication number
US20150172607A1
Authority
US
United States
Prior art keywords
user
tourism
docent
candidate
docents
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/141,194
Inventor
Udi Manber
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US14/141,194
Assigned to GOOGLE INC.; assignor: MANBER, UDI (assignment of assignors interest)
Publication of US20150172607A1
Assigned to GOOGLE LLC (change of name from GOOGLE INC.)
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H04L 65/403 Arrangements for multi-party communication, e.g. for conferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • H04N 7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 Session management
    • H04L 65/1069 Session establishment or de-establishment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences

Definitions

  • The vicarious tourism session system 140 allows people to organize, request, and participate in vicarious tourism sessions.
  • In a vicarious tourism session, a person interacts with another person to allow one of the people, who for convenience may be referred to as the visitor, to experience visiting a particular physical location or point of interest, e.g., by viewing live video being captured by the other person.
  • In some instances, the visitor acts purely as an observer. In other instances, however, the visitor can play an active role, and information goes in both directions during the session.
  • The term “vicarious tourism session” may thus refer to such an interaction, the period of interaction, or a recording of such an interaction, as the context requires.
  • The vicarious tourism session system 140 allows visitors using user devices 130 to experience visiting a particular physical location or point of interest by viewing video being captured by docents using session accessories 160. That is, during the vicarious tourism session, the vicarious tourism session system 140 provides a video stream of video and audio captured by the session accessory 160 worn by the docent to a user device 130 for presentation to the user.
  • Session accessories 160 are devices that a person can use to participate in sessions with the vicarious tourism session system 140.
  • Session accessories 160 will typically be portable, personal, multimode, e.g., audio and video, electronic devices.
  • Session accessories 160 can be, for example, wearable computing devices that include a camera and a microphone and that may be worn on a user's person.
  • For example, a session accessory 160 can include a hat camera system that includes a camera mounted on a hat, e.g., on the brim of the hat, which can connect wirelessly to the vicarious tourism session system 140, e.g., by connecting wirelessly to a mobile computing system, e.g., a mobile phone or other mobile device, that can in turn connect to the vicarious tourism session system 140, or by connecting to the vicarious tourism session system 140 over a Wi-Fi network.
  • A hat camera system is described in more detail in U.S. patent application Ser. No. 61/781,506, entitled “Wearable Camera Systems” and filed on Mar. 14, 2013.
  • Session accessories 160 can also include other camera systems worn by a person that provide point-of-view video data, for example, a helmet-mounted camera.
  • A session accessory 160 is a system that includes one or more of an audio input device 160-1, a video input device 160-2, a display device 160-3, and optionally other input devices, e.g., for text or gesture input.
  • A session accessory 160 may be used during a vicarious tourism session to broadcast video taken from the point of view of a docent wearing the session accessory 160 to another user participating in the session.
  • When a session accessory 160 connects through a mobile device, the mobile device can be configured to communicate with the vicarious tourism session system 140 through an application executing on the mobile device.
  • A session accessory 160 may include multiple video input devices 160-2 whose output, after being processed by the vicarious tourism session system 140, can provide video feeds that present panoramic views without distortion to other users participating in a session.
  • The video input device 160-2 may include image stabilization features to improve the stability of the video feed captured by the session accessory 160 and provided to other users participating in the session.
  • The zoom and position of the video input devices 160-2 may be controllable by other users participating in a session by submitting an input to the vicarious tourism session system 140.
  • A docent is a user who has registered with the vicarious tourism session system 140, and been accepted by the system, to provide vicarious tourism sessions that are relevant to a specified geographic location or region or to a particular point of interest. For example, a user may register to be a docent who provides vicarious tours of the Great Wall of China or of Paris, France, using a session accessory 160. Vicarious tourism sessions are described in more detail below with reference to FIG. 2.
  • Completed vicarious tourism sessions can be stored as session data 142 so that they can be replayed by the user or, with the visitor's and the docent's consent, by other users.
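The consent rule for replaying stored sessions could be sketched as follows. This is an illustrative Python sketch only; the record field names (`visitor`, `visitor_consent`, `docent_consent`) are assumptions, not terms from the specification.

```python
# Hypothetical sketch of the replay rule for stored session data 142:
# a completed session can be replayed by its own visitor, or by other
# users only if both the visitor and the docent have consented.
# Field names are illustrative assumptions, not taken from the patent.
def can_replay(session, requesting_user):
    if requesting_user == session["visitor"]:
        return True
    return bool(session.get("visitor_consent") and session.get("docent_consent"))
```

A stored session would simply carry the two consent flags alongside the recording, so the check stays a pure function of the session record.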
  • FIG. 2 is a flow diagram of an example process 200 for initiating a vicarious tourism session between a user and a docent.
  • The process 200 will be described as being performed by a system of one or more computers located in one or more locations.
  • A vicarious tourism session system, e.g., the vicarious tourism session system 140 of FIG. 1, appropriately programmed, can perform the process 200.
  • The system receives a request from a user of a user device to participate in a vicarious tourism session (step 202).
  • The request can specify one or more tourism parameters.
  • For example, the request may identify a particular location or a particular point of interest that the user would like to tour.
  • The request may specify a date and a time, or a range of dates and times, for the vicarious tourism session.
  • The request may specify a maximum amount of money the user is willing to pay to participate in the vicarious tourism session.
  • The request may specify one or more types of vicarious tourism sessions the user would like to participate in, e.g., a monument visit session, a nature hike session, an architectural tour session, and so on.
  • The request may identify one or more docents that the user would like to provide the vicarious tourism session.
  • If the request does not specify a value for a given tourism parameter, the system can assign a default value for that parameter.
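The parameter-defaulting step might look like the following sketch. The specific parameter names and default values here are hypothetical, chosen only to illustrate the merge of user-specified values over system defaults.

```python
# Illustrative defaults for tourism parameters the request may omit.
# Parameter names and values are assumptions, not from the specification.
DEFAULT_PARAMETERS = {
    "session_type": "general",  # e.g., monument visit, nature hike
    "max_price": None,          # no spending cap unless the user sets one
    "time_window": "any",
}

def resolve_parameters(request_params):
    """Fill in a default value for any tourism parameter the request omits."""
    resolved = dict(DEFAULT_PARAMETERS)
    # User-specified values override defaults; unset (None) values do not.
    resolved.update({k: v for k, v in request_params.items() if v is not None})
    return resolved
```

Keeping the defaults in one table means later steps (docent matching, ranking) can assume every parameter has a value.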
  • The system selects candidate docents in response to the request (step 204).
  • The system can select as a candidate docent any available docent that has registered with the system to provide a vicarious tourism session that meets the tourism parameters specified in the received request.
  • The system can determine which of the docents registered with the system are available in any of a variety of ways.
  • The system can make the determination based on availability data received from the docents that identifies time periods during which the docents are available to give tours.
  • The system can make the determination based on which docents are currently logged in to the system or based on data that identifies a current presence status of the docents, i.e., active, idle, busy, or offline.
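Combining the two availability signals above, a minimal candidate-selection sketch might look like this. The status strings and `(start, end)` hour windows are illustrative assumptions about the data shapes, not details from the specification.

```python
# Illustrative sketch of the availability check: a docent counts as
# available when their presence status permits it and, if they declared
# availability windows, the requested time falls inside one of them.
# Status names and window format are assumptions, not from the patent.
AVAILABLE_STATUSES = {"active", "idle"}

def is_available(docent, requested_hour):
    if docent.get("status") not in AVAILABLE_STATUSES:
        return False
    windows = docent.get("windows")
    if not windows:
        # No declared windows: rely on presence status alone.
        return True
    return any(start <= requested_hour <= end for start, end in windows)

def select_candidates(docents, requested_hour):
    """Keep only the registered docents who are currently available."""
    return [d for d in docents if is_available(d, requested_hour)]
```

In practice a further filter would compare each docent's registered location and session types against the request's tourism parameters.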
  • The system provides data identifying the candidate docents to the user device (step 206).
  • For example, the system can provide a map interface for presentation to the user that identifies locations of vicarious tourism sessions provided by the candidate docents. Once the user submits an input selecting a location, the system can provide another user interface through which the user can request a vicarious tourism session.
  • The system can rank the available sessions based on, e.g., the cost to the user to participate in the session, user reviews of previous tourism sessions provided by a given docent, e.g., reviews by the user, reviews by other users, or both, or whether the docent has expertise in something of interest to the user, and provide the sessions for display in an order according to the ranking.
  • The user interface can allow the user to sort the identified sessions according to any of a set of criteria.
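One possible ranking over candidate sessions, combining cost to the user (lower is better) with average review score (higher is better), is sketched below. The weights and field names are assumptions for illustration; the specification does not fix a scoring formula.

```python
# Hypothetical ranking of candidate sessions: better-reviewed docents
# score higher, more expensive sessions score lower. The linear weighting
# and the field names are illustrative assumptions, not from the patent.
def rank_sessions(sessions, cost_weight=0.5, review_weight=0.5):
    def score(session):
        return review_weight * session["avg_review"] - cost_weight * session["cost"]
    return sorted(sessions, key=score, reverse=True)
```

A real system would likely normalize cost and reviews to comparable scales, and could expose the same `score` components as user-selectable sort criteria.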
  • The system receives a user input selecting a candidate docent (step 208) and initiates a vicarious tourism session between the docent and the user of the user device (step 210).
  • If multiple users select the same candidate docent, e.g., within a predetermined time window of one another, the system can initiate the vicarious tourism session between the docent and the multiple users.
  • The vicarious tourism session is a “point of view” session in which a docent wearing a session accessory offers a user of a user device the experience of visiting a particular physical location or point of interest.
  • The system provides a video stream of video captured by the session accessory of the docent to one or more user devices for presentation to one or more users.
  • The docent may also include in the video stream video or audio from other sources, e.g., from a video camera with high-quality zoom lenses on a sturdy mounting.
  • If the video stream includes video from a mounted video camera configured to communicate with the system, the user may be able to control the zoom and the position of the camera by providing an input to the system.
  • Similarly, the user may be able to control the zoom and the position of the camera or video input device of the session accessory by providing an input to the system.
  • Prior to initiating the vicarious tourism session, the system can allow the docent the opportunity to agree to participate in the session, or allow the user and the docent to negotiate the terms of the vicarious tourism session, e.g., by providing a user interface identifying the vicarious tourism session and the user for presentation to the docent or by establishing a channel of communication between the user and the docent.
  • The docent can provide an itinerary or a route for the session.
  • The system can then provide information identifying the itinerary or route for display to the user, e.g., prior to the user selecting a session.
  • The user may have an option to generate an itinerary or route, e.g., by interacting with a map interface provided by the system, by uploading an existing itinerary or route, or by selecting from available itineraries or routes maintained by the system for the location or point of interest.
  • The system may provide the user-created, user-uploaded, or user-selected itinerary or route to the docent for approval before initiating the vicarious tourism session.
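The approval step above amounts to routing the user's itinerary to the docent and gating session initiation on the response. A minimal sketch, in which `docent_decision` is a hypothetical stand-in for the docent's real response channel:

```python
# Minimal sketch of the itinerary-approval gate: a user-created,
# user-uploaded, or user-selected itinerary goes to the docent before
# the session is initiated. All names are illustrative assumptions.
def propose_itinerary(itinerary, docent_decision):
    """Return the itinerary tagged with the docent's decision."""
    status = "approved" if docent_decision(itinerary) else "rejected"
    return {"itinerary": list(itinerary), "status": status}
```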
  • A user may be able to interact with the docent during the vicarious tourism session. For example, the user may be able to pose questions or make requests to the docent as part of the vicarious tourism session. In particular, the user may be able to give directions to the docent on where to go next, where to look, how fast to go, and so on.
  • The system can receive input from the user and translate it into one of a standardized set of commands that will be understandable by the docent.
  • The system can receive an input from the user that specifies an instruction for the docent, e.g., a user selection of a user interface element that indicates that the user wants the docent to move in a specific direction, a user swipe movement on a touchscreen display of the user device in the specific direction, or a user movement of an input device in the specific direction, and translate the input so that it may be understood by the docent.
  • The system can translate the instruction into an audio command in a language that is spoken by the docent, e.g., a language specified in a user profile of the docent or the most commonly spoken language where the docent is located.
  • Alternatively, the system can generate one of a pre-defined set of audio signals that correspond to the received instruction, e.g., a hum, a bang, a squeak, and so on.
  • Other ways of communicating an instruction to a docent may be possible, e.g., by causing the session accessory or a mobile device wirelessly connected to the session accessory to vibrate, by causing a portion of the session accessory to move and contact the docent in a pre-determined fashion, i.e., by causing a pre-determined touch signal to be applied to the docent, or by generating a signal that is visible to the docent while wearing the session accessory, e.g., if the session accessory includes a display.
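The translation step above can be sketched as two table lookups: raw input to standardized command, then standardized command to a signal in whichever modality (audio, touch, or visual) suits the docent. All of the mappings and signal names below are illustrative assumptions, not values from the specification.

```python
# Hypothetical translation of a raw user input (a swipe direction) into
# one of a standardized set of commands, delivered as an audio, touch,
# or visual signal. Every mapping here is an illustrative assumption.
SWIPE_TO_COMMAND = {"left": "turn_left", "right": "turn_right", "up": "move_forward"}

COMMAND_SIGNALS = {
    "turn_left":    {"audio": "chime_left",   "touch": "vibrate_left",  "visual": "arrow_left"},
    "turn_right":   {"audio": "chime_right",  "touch": "vibrate_right", "visual": "arrow_right"},
    "move_forward": {"audio": "chime_double", "touch": "vibrate_long",  "visual": "arrow_up"},
}

def translate_input(swipe_direction, modality="audio"):
    """Map a swipe to a docent-facing signal; None if it is not an instruction."""
    command = SWIPE_TO_COMMAND.get(swipe_direction)
    if command is None:
        return None
    return COMMAND_SIGNALS[command][modality]
```

Because the intermediate command set is standardized, the same user gesture can reach docents through different hardware without the user or docent sharing a language.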
  • The system can overlay various kinds of information over the video stream that is presented to the user.
  • For example, the system can overlay a map that displays the current location of the docent, e.g., obtained by the system from the session accessory or from a different device on the docent's person.
  • The map may also display the proposed route for the session.
  • The system can, based on the location information or by applying object recognition techniques to the video being captured by the session accessory, detect points of interest or other geographic entities near the location of the docent and overlay information about the points of interest or other geographic entities in proximity to the docent.
  • The system can also overlay information obtained from a social network account of the user, e.g., images of past visits to the location by the user or by other users in the user's social network, status updates by users in the user's social network, and so on.
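The proximity test for choosing which points of interest to overlay could be a simple great-circle distance filter around the docent's reported location, as sketched below. The 0.5 km radius, the field names, and the example coordinates are assumptions for illustration only.

```python
import math

# Great-circle (haversine) distance in kilometres between two lat/lon points.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_points_of_interest(docent_lat, docent_lon, points, radius_km=0.5):
    """Return the POIs within radius_km of the docent, nearest first.

    The radius and the POI record fields ("lat", "lon") are illustrative
    assumptions, not values from the specification.
    """
    tagged = [(haversine_km(docent_lat, docent_lon, p["lat"], p["lon"]), p)
              for p in points]
    tagged.sort(key=lambda t: t[0])
    return [p for d, p in tagged if d <= radius_km]
```

Overlay candidates found this way could then be merged with any entities detected by object recognition in the video itself.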
  • Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus.
  • The program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
  • While a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal.
  • The computer storage medium can also be, or be included in, one or more separate physical components or media.
  • The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
  • The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
  • The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
  • The apparatus and execution environment can realize various different computing model infrastructures, e.g., web services, distributed computing, and grid computing infrastructures.
  • A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • A computer program may, but need not, correspond to a file in a file system.
  • A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code.
  • A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • A processor will receive instructions and data from a read-only memory or a random access memory or both.
  • The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • A computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data.
  • However, a computer need not have such devices.
  • A computer can be embedded in another device, e.g., a mobile telephone, a smart phone, a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a wearable computer device, to name just a few.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, magnetic disks, and the like.
  • The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing vicarious tourism sessions. In one aspect, a method includes receiving a request from a user of a user device to participate in a vicarious tourism session; selecting candidate docents, wherein a docent is a user who has registered to participate in vicarious tourism sessions that are relevant to a particular geographic location; providing data identifying the candidate docents for presentation to the user device; receiving a user input selecting a candidate docent; and initiating a vicarious tourism session between the selected docent and the user, wherein initiating the vicarious tourism session comprises providing a video feed of video captured from a session accessory worn by the selected docent to the user device for presentation to the user.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application claims priority to U.S. Provisional Application No. 61/785,085, filed on Mar. 14, 2013. The disclosure of the prior application is considered part of and is incorporated by reference in the disclosure of this application.
  • BACKGROUND
  • This specification relates to interactive environments that connect network-enabled communication devices. Various types of devices, e.g., desktop computers and mobile phones, can communicate with one another using various data communication networks, e.g., the Internet.
  • SUMMARY
  • This specification describes technologies relating to providing vicarious tourism sessions to users.
  • In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of receiving a request from a user of a user device to participate in a vicarious tourism session; selecting candidate docents, wherein a docent is a user who has registered to participate in vicarious tourism sessions that are relevant to a particular geographic location; providing data identifying the candidate docents for presentation to the user device; receiving a user input selecting a candidate docent; and initiating a vicarious tourism session between the selected docent and the user, wherein initiating the vicarious tourism session comprises providing a video feed of video captured from a session accessory worn by the selected docent to the user device for presentation to the user.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
  • The foregoing and other embodiments can each optionally include one or more of the following features, alone or in combination. The request can specify one or more tourism parameters, and selecting candidate docents can include selecting as candidate docents available docents who have registered to provide vicarious tourism sessions matching the tourism parameters specified in the request. Selecting candidate docents can further include: identifying available docents.
  • The actions can further include: ranking the candidate docents based on a respective cost to the user to participate in a vicarious tourism session with each candidate docent or on user reviews of previous tourism sessions with each candidate docent.
  • Providing data identifying the candidate docents can include: providing a map interface for presentation to the user, wherein the map interface identifies locations where candidate docents are available to participate in interactive tourism sessions.
  • The actions can further include: overlaying relevant information over the video feed for presentation to the user, wherein the relevant information is relevant to the geographic location of the selected docent.
  • The actions can further include: receiving an input from the user; determining that the input is an instruction for the selected docent; and translating the input into one of a standardized set of commands that is understandable by the selected docent.
  • The standardized set of commands can include one or more of an audio signal, a touch signal, or a visual signal.
  • Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. Users of a vicarious tourism system can easily experience touring various locations or points of interest through vicarious tourism sessions without having to be physically located in the location or at the point of interest. During the vicarious tourism session, a user can easily communicate with a docent giving the tour even if the user and the docent do not speak the same language.
  • The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an example vicarious tourism session system.
  • FIG. 2 is a flow diagram of an example process for initiating a vicarious tourism session.
  • Like reference numbers and designations in the various drawings indicate like elements.
  • DETAILED DESCRIPTION
  • FIG. 1 shows an example vicarious tourism session system 140. The vicarious tourism session system 140 is an example of a system implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented.
  • A user can interact with the vicarious tourism session system 140 using a user device 130 through a data communication network 102. The network 102 enables data communication between multiple electronic devices. Users can access content, provide content, exchange information, and participate in vicarious tourism sessions by use of the devices and systems that can communicate with each other over the network 102. The network 102 can include, for example, a local area network (LAN), e.g., a Wi-Fi network, a cellular phone network, a wide area network (WAN), e.g., the Internet, or a combination of them.
  • A user device 130 is an electronic device, or collection of devices, that is under the control of a user and is capable of interacting with the vicarious tourism session system 140 over the network 102. Example user devices 130 include personal computers 132, mobile communication devices 134, and other devices that can send and receive data over the network 102. A user device 130 is typically configured with a user application, e.g., a web browser, that sends and receives data over the network 102, generally in response to user actions. The user application can enable a user to display and interact with text, images, videos, music and other content, which can be located on a web page on the World Wide Web or a local area network.
  • The vicarious tourism session system 140 allows people to organize, request, and participate in vicarious tourism sessions. In a vicarious tourism session, a person interacts with another person to allow one of the people, who for convenience may be referred to as the visitor, to experience visiting a particular physical location or point of interest, e.g., by viewing live video being captured by the other person. In some instances, the visitor acts purely as an observer. In other instances, however, the visitor can play an active role, and information goes in both directions during the session. The term “vicarious tourism session” may thus refer to such an interaction, the period of interaction, or a recording of such an interaction, as the context requires.
  • In particular, the vicarious tourism session system 140 allows visitors using user devices 130 to experience visiting a particular physical location or point of interest by viewing video being captured by docents using session accessories 160. That is, during the vicarious tourism session, the vicarious tourism session system 140 provides a video stream of video and audio captured by the session accessory 160 worn by the docent to a user device 130 for presentation to the user.
  • In general, session accessories 160 are devices that a person can use to participate in sessions with the vicarious tourism session system 140. Session accessories 160 will typically be portable, personal, multimode, e.g., audio and video, electronic devices. Session accessories 160 can be, for example, wearable computing devices that include a camera and a microphone that may be worn on a user's person. For example, a session accessory 160 can include a hat camera system that includes a camera mounted on a hat, e.g., on the brim of a hat, which can connect wirelessly to the vicarious tourism session system 140, e.g., by connecting wirelessly to a mobile computing system, e.g., a mobile phone or other mobile device, that can connect to the vicarious tourism session system 140 or by connecting to the vicarious tourism session system 140 over a Wi-Fi network. An example hat camera system is described in more detail in U.S. patent application Ser. No. 61/781,506, entitled “Wearable Camera Systems” and filed on Mar. 14, 2013. Session accessories 160 can also include other camera systems worn by a person that provide point-of-view video data, for example, a helmet-mounted camera. Generally, a session accessory 160 is a system that includes one or more of an audio input device 160-1, a video input device 160-2, a display device 160-3, and optionally other input devices, e.g., for text or gesture input.
  • A session accessory 160 may be used during a vicarious tourism session to broadcast video taken from the point of view of a docent wearing the session accessory 160 to another user participating in the session. In implementations where the session accessory 160 connects wirelessly to a mobile device, the mobile device can be configured to communicate with the vicarious tourism session system 140 through an application executing on the mobile device. Optionally, a session accessory 160 may include multiple video input devices 160-2 that, after being processed by the vicarious tourism session system 140, can provide video feeds that present panoramic views without distortion to other users participating in a session. Further optionally, the video input device 160-2 may include image stabilization features to improve the stability of the video feed captured by the session accessory 160 and provided to other users participating in the session. Further optionally, the zoom and position of the video input devices 160-2 may be controllable by other users participating in a session by submitting an input to the vicarious tourism session system 140.
  • A docent is a user who has registered with the vicarious tourism session system 140 in order to be accepted by the system to provide vicarious tourism sessions that are relevant to a specified geographic location or region or to a particular point of interest. For example, a user may register to be a docent who provides vicarious tours of the Great Wall of China or of Paris, France using a session accessory 160. Vicarious tourism sessions are described in more detail below with reference to FIG. 2.
  • Completed vicarious tourism sessions can be stored as session data 142 so that they can be replayed by the user or, with the visitor's and docent's consent, by other users.
  • FIG. 2 is a flow diagram of an example process 200 for initiating a vicarious tourism session between a user and a docent. For convenience, the process 200 will be described as being performed by a system of one or more computers located in one or more locations. For example, a vicarious tourism session system, e.g., the vicarious tourism session system 140 of FIG. 1, appropriately programmed, can perform the process 200.
  • The system receives a request from a user of a user device to participate in a vicarious tourism session (step 202). The request can specify one or more tourism parameters. For example, the request may identify a particular location or a particular point of interest that the user would like to tour. The request may specify a date and a time, or a range of dates and times, for the vicarious tourism session. The request may specify a maximum amount of money the user is willing to pay to participate in the vicarious tourism session. The request may specify one or more types of vicarious tourism sessions the user would like to participate in, e.g., a monument visit session, a nature hike session, an architectural tour session, and so on. The request may identify one or more docents that the user would like to provide the vicarious tourism session. In some implementations, if the request does not specify values for one or more of the parameters, the system can assign a default value for the parameter.
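  • The parameter handling described above (explicit values where the user supplies them, system-assigned defaults otherwise) can be sketched as follows. This is a minimal illustration only; the field names and default values are assumptions, since the specification does not prescribe them:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative defaults; the specification leaves the default values unspecified.
DEFAULTS = {
    "session_type": "general tour",
    "max_price": None,          # None = no price limit specified
}

@dataclass
class TourismRequest:
    location: str                                   # location or point of interest to tour
    session_type: str = DEFAULTS["session_type"]
    max_price: Optional[float] = DEFAULTS["max_price"]
    preferred_docents: tuple = ()

def fill_defaults(raw: dict) -> TourismRequest:
    """Build a request, assigning a default value for any unspecified parameter."""
    return TourismRequest(
        location=raw["location"],
        session_type=raw.get("session_type", DEFAULTS["session_type"]),
        max_price=raw.get("max_price", DEFAULTS["max_price"]),
        preferred_docents=tuple(raw.get("preferred_docents", ())),
    )
```

A request naming only a location and a price cap would then receive the default session type.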
  • The system selects candidate docents in response to the request (step 204). The system can select as a candidate docent any available docent that has registered with the system to provide a vicarious tourism session that meets the tourism parameters specified in the received request. The system can determine which docents of the docents registered with the system are available in any of a variety of ways.
  • For example, the system can make the determination based on availability data received from the docents that identifies time periods during which the docents are available to give tours. As another example, the system can make the determination based on which docents are currently logged in to the system or based on data that identifies a current presence status of the docents, i.e., active, idle, busy, or offline.
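  • Combining the two availability signals above (declared availability windows and current presence status) with the registered location, candidate selection might look like the following sketch; the registry fields and the example docent records are hypothetical:

```python
from datetime import datetime

# Hypothetical registry rows: id, registered location, presence status,
# and declared availability windows (start, end).
DOCENTS = [
    {"id": "d1", "location": "paris", "presence": "active",
     "windows": [(datetime(2014, 1, 2, 9), datetime(2014, 1, 2, 17))]},
    {"id": "d2", "location": "paris", "presence": "offline",
     "windows": [(datetime(2014, 1, 2, 9), datetime(2014, 1, 2, 17))]},
    {"id": "d3", "location": "rome", "presence": "active",
     "windows": [(datetime(2014, 1, 2, 9), datetime(2014, 1, 2, 17))]},
]

def select_candidates(docents, location, when):
    """A docent is a candidate if registered for the requested location,
    currently active, and inside a declared availability window."""
    def available(d):
        in_window = any(start <= when <= end for start, end in d["windows"])
        return d["presence"] == "active" and in_window
    return [d["id"] for d in docents if d["location"] == location and available(d)]
```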
  • The system provides data identifying the candidate docents to the user device (step 206). For example, the system can provide a map interface for presentation to the user that identifies locations of vicarious tourism sessions provided by the candidate docents. Once the user submits an input selecting a location, the system can provide another user interface through which the user can request a vicarious tourism session. The system can rank the available sessions based on, e.g., cost to the user to participate in the session, on user reviews of previous tourism sessions provided by a given docent, e.g., on reviews by the user, reviews by other users, or both, or on whether the docent has expertise in something of interest to the user, and provide the sessions for display in an order according to the ranking. Optionally, the user interface can allow the user to sort the identified sessions according to any of a set of criteria.
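  • One simple way to combine the ranking signals mentioned above is to sort by average review score and break ties by cost, which can be sketched as follows; the review scale and tie-breaking rule are illustrative assumptions, not part of the disclosure:

```python
def rank_sessions(sessions):
    """Order sessions by average review score (descending), breaking ties
    by cost to the user (ascending)."""
    def avg_review(s):
        return sum(s["reviews"]) / len(s["reviews"]) if s["reviews"] else 0.0
    return sorted(sessions, key=lambda s: (-avg_review(s), s["cost"]))

# Hypothetical candidate sessions.
sessions = [
    {"docent": "d1", "cost": 30.0, "reviews": [4, 5]},
    {"docent": "d2", "cost": 20.0, "reviews": [4, 5]},
    {"docent": "d3", "cost": 10.0, "reviews": [2, 3]},
]
ranked = rank_sessions(sessions)
```

Here d1 and d2 have the same average review, so the cheaper session is listed first.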
  • The system receives a user input selecting a candidate docent (step 208) and initiates a vicarious tourism session between the docent and the user of the user device (step 210). In some implementations, if one or more other users select the same candidate docent, e.g., within a predetermined time window of the user, the system can initiate the vicarious tourism session between the docent and multiple users. The vicarious tourism session is a “point of view” session in which a docent wearing a session accessory offers a user of a user device the experience of visiting a particular physical location or point of interest. That is, during the vicarious tourism session, the system provides a video stream of video captured by the session accessory of the docent to one or more user devices for presentation to one or more users. The docent may also include in the video stream video or audio from other sources, e.g., from a video camera with high quality zoom lenses on a sturdy mounting. Optionally, when the video stream includes video from a mounted video camera configured to communicate with the system, the user may be able to control the zoom and the position of the camera by providing an input to the system. Further optionally, depending on the session accessory worn by the docent, the user may be able to control the zoom and the position of the camera or video input device of the session accessory by providing an input to the system.
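  • The time-window grouping mentioned above (multiple users selecting the same docent joining one shared session) can be sketched as follows; the window length and the selection-record format are illustrative assumptions:

```python
def group_selections(selections, window_seconds=120):
    """Group users who select the same docent within window_seconds of the
    first selection into a single shared session.

    selections: iterable of (timestamp_seconds, user_id, docent_id)."""
    groups = []
    for ts, user, docent in sorted(selections):
        for g in groups:
            # Join an open group for this docent if still inside its window.
            if g["docent"] == docent and ts - g["start"] <= window_seconds:
                g["users"].append(user)
                break
        else:
            groups.append({"docent": docent, "start": ts, "users": [user]})
    return groups
```

A selection arriving after the window has closed simply opens a new session with the same docent.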
  • In some implementations, prior to initiating the vicarious tourism session, the system allows the docent the opportunity to agree to participate in the session or for the user and the docent to negotiate the terms of the vicarious tourism session, e.g., by providing a user interface identifying the vicarious tourism session and the user for presentation to the docent or by establishing a channel of communication between the user and the docent.
  • In some implementations, the docent can provide an itinerary or a route for the session. The system can then provide information identifying the itinerary or route for display to the user, e.g., prior to the user selecting a session. In some implementations, the user may have an option to generate an itinerary or route, e.g., by interacting with a map interface provided by the system, uploading an existing itinerary or route, or by selecting from available itineraries or routes maintained by the system for the location or point of interest. In these implementations, the system may provide the user-created, user-uploaded, or user-selected itinerary or route to the docent for approval before initiating the vicarious tourism session.
  • A user may be able to interact with the docent during the vicarious tourism session. For example, the user may be able to pose questions or make requests to the docent as part of the vicarious tourism session. In particular, the user may be able to give directions to the docent on where to go next, where to look, how fast to go, and so on.
  • However, in many circumstances, the user and the docent may not be able to communicate directly and effectively, for example, because they do not speak the same language. Therefore, in some implementations, the system can receive input from the user and translate it into one of a standardized set of commands that will be understandable by the docent. For example, the system can receive an input from the user that specifies an instruction for the docent, e.g., a user selection of a user interface element that indicates that the user wants the docent to move in a specific direction, a user swipe movement on a touchscreen display of the user device in the specific direction, or a user movement of an input device in the specific direction, and translate the inputs so that they may be understood by the docent. For example, the system can translate the instruction into an audio command in a language that is spoken by the docent, e.g., a language specified in a user profile of the docent or a most-commonly spoken language where the docent is located. In some implementations, the system can generate one of a pre-defined set of audio signals that correspond to the received instruction, e.g., a hum, a bang, a squeak, and so on. Depending on the features available on the session accessory or other user devices possessed by the docent, other ways of signaling a docent may be possible, e.g., by causing the session accessory or a mobile device wirelessly connected to the session accessory to vibrate, by causing a portion of the session accessory to move and contact the docent in a pre-determined fashion, i.e., by causing a pre-determined touch signal to be applied to the docent, or by generating a signal that is visible to the docent while wearing the session accessory, e.g., if the session accessory includes a display.
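  • The translation step described above (raw user input → standardized command → audio, touch, or visual signal for the docent) can be sketched as follows. The command vocabulary, the translations, and the signal names are all hypothetical, since the specification leaves them open:

```python
# Hypothetical mapping from raw user inputs to a standardized command set.
INPUT_TO_COMMAND = {
    ("button", "left"): "turn_left",
    ("button", "right"): "turn_right",
    ("swipe", "up"): "move_forward",
    ("swipe", "down"): "stop",
}

# Each standardized command can be delivered as an audio, touch, or visual signal.
RENDERINGS = {
    "turn_left":    {"audio": {"en": "Turn left", "fr": "Tournez à gauche"},
                     "touch": "vibrate:left", "visual": "arrow:left"},
    "turn_right":   {"audio": {"en": "Turn right", "fr": "Tournez à droite"},
                     "touch": "vibrate:right", "visual": "arrow:right"},
    "move_forward": {"audio": {"en": "Go forward", "fr": "Avancez"},
                     "touch": "vibrate:double", "visual": "arrow:up"},
    "stop":         {"audio": {"en": "Stop", "fr": "Arrêtez"},
                     "touch": "vibrate:long", "visual": "sign:stop"},
}

def translate_input(input_kind, input_value, docent_language="en", channel="audio"):
    """Map a raw user input to a standardized command rendered for the docent.
    Returns None if the input is not an instruction for the docent."""
    command = INPUT_TO_COMMAND.get((input_kind, input_value))
    if command is None:
        return None
    rendering = RENDERINGS[command][channel]
    if channel == "audio":
        # Fall back to English if no rendering exists for the docent's language.
        return rendering.get(docent_language, rendering["en"])
    return rendering
```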
  • Optionally, the system can overlay various kinds of information over the video stream that is presented to the user. For example, the system can overlay a map that displays the current location of the docent, e.g., obtained by the system from the session accessory or from a different device on the docent's person. The map may also display the proposed route for the session. As another example, the system can, based on the location information or by applying object recognition techniques to the video being captured by the session accessory, detect points of interest or other geographic entities near the location of the docent and overlay information about the points of interest or other geographic entities in proximity to the docent. As another example, the system can overlay information obtained from a social network account of the user, e.g., images of past visits to the location by the user or by other users in the user's social network, status updates by users in the user's social network, and so on.
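  • The location-based part of the overlay step above (finding points of interest near the docent's reported position) can be sketched with a great-circle distance query. The point-of-interest records and the 500 m radius are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical point-of-interest index near the Louvre in Paris.
POIS = [
    {"name": "Louvre Pyramid", "lat": 48.8610, "lon": 2.3358},
    {"name": "Pont des Arts", "lat": 48.8583, "lon": 2.3375},
    {"name": "Eiffel Tower", "lat": 48.8584, "lon": 2.2945},
]

def nearby_pois(docent_lat, docent_lon, radius_m=500):
    """Names of points of interest within radius_m of the docent, nearest first."""
    hits = [(haversine_m(docent_lat, docent_lon, p["lat"], p["lon"]), p["name"])
            for p in POIS]
    return [name for dist, name in sorted(hits) if dist <= radius_m]
```

A docent standing in the Louvre courtyard would see the two nearby entries overlaid but not the Eiffel Tower, which is several kilometers away.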
  • Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media.
  • The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, e.g., web services, distributed computing and grid computing infrastructures.
  • A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a smart phone, a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, and a wearable computer device, to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, magnetic disks, and the like. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input and output.
  • While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

Claims (20)

What is claimed is:
1. A method performed by one or more computers, the method comprising:
receiving a request from a user of a user device to participate in a vicarious tourism session;
selecting candidate docents, wherein a docent is a user who has registered to participate in vicarious tourism sessions that are relevant to a particular geographic location;
providing data identifying the candidate docents for presentation to the user device;
receiving a user input selecting a candidate docent; and
initiating a vicarious tourism session between the selected docent and the user, wherein initiating the vicarious tourism session comprises providing a video feed of video captured from a session accessory worn by the selected docent to the user device for presentation to the user.
2. The method of claim 1, wherein the request specifies one or more tourism parameters, and wherein selecting candidate docents comprises selecting as candidate docents available docents who have registered to provide vicarious tourism sessions matching the tourism parameters specified in the request.
3. The method of claim 2, wherein selecting candidate docents further comprises:
identifying available docents.
4. The method of claim 1, further comprising:
ranking the candidate docents based on a respective cost to the user to participate in a vicarious tourism session with each candidate docent or on user reviews of previous tourism sessions with each candidate docent.
5. The method of claim 1, wherein providing data identifying the candidate docents comprises:
providing a map interface for presentation to the user, wherein the map interface identifies locations where candidate docents are available to participate in interactive tourism sessions.
6. The method of claim 1, further comprising:
overlaying relevant information over the video feed for presentation to the user, wherein the relevant information is relevant to the geographic location of the selected docent.
7. The method of claim 1, further comprising:
receiving an input from the user;
determining that the input is an instruction for the selected docent; and
translating the input into one of a standardized set of commands that is understandable by the selected docent.
8. The method of claim 7, wherein the standardized set of commands includes one or more of an audio signal, a touch signal, or a visual signal.
9. A system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations comprising:
receiving a request from a user of a user device to participate in a vicarious tourism session;
selecting candidate docents, wherein a docent is a user who has registered to participate in vicarious tourism sessions that are relevant to a particular geographic location;
providing data identifying the candidate docents for presentation to the user device;
receiving a user input selecting a candidate docent; and
initiating a vicarious tourism session between the selected docent and the user, wherein initiating the vicarious tourism session comprises providing a video feed of video captured from a session accessory worn by the selected docent to the user device for presentation to the user.
10. The system of claim 9, wherein the request specifies one or more tourism parameters, and wherein selecting candidate docents comprises selecting as candidate docents available docents who have registered to provide vicarious tourism sessions matching the tourism parameters specified in the request.
11. The system of claim 10, wherein selecting candidate docents further comprises:
identifying available docents.
12. The system of claim 9, the operations further comprising:
ranking the candidate docents based on a respective cost to the user to participate in a vicarious tourism session with each candidate docent or on user reviews of previous tourism sessions with each candidate docent.
13. The system of claim 9, wherein providing data identifying the candidate docents comprises:
providing a map interface for presentation to the user, wherein the map interface identifies locations where candidate docents are available to participate in interactive tourism sessions.
14. The system of claim 9, the operations further comprising:
overlaying relevant information over the video feed for presentation to the user, wherein the relevant information is relevant to the geographic location of the selected docent.
15. The system of claim 9, the operations further comprising:
receiving an input from the user;
determining that the input is an instruction for the selected docent; and
translating the input into one of a standardized set of commands that is understandable by the selected docent.
16. The system of claim 15, wherein the standardized set of commands includes one or more of an audio signal, a touch signal, or a visual signal.
17. A computer storage medium encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising:
receiving a request from a user of a user device to participate in a vicarious tourism session;
selecting candidate docents, wherein a docent is a user who has registered to participate in vicarious tourism sessions that are relevant to a particular geographic location;
providing data identifying the candidate docents for presentation to the user device;
receiving a user input selecting a candidate docent; and
initiating a vicarious tourism session between the selected docent and the user, wherein initiating the vicarious tourism session comprises providing a video feed of video captured from a session accessory worn by the selected docent to the user device for presentation to the user.
18. The computer storage medium of claim 17, wherein the request specifies one or more tourism parameters, and wherein selecting candidate docents comprises selecting as candidate docents available docents who have registered to provide vicarious tourism sessions matching the tourism parameters specified in the request.
19. The computer storage medium of claim 17, wherein providing data identifying the candidate docents comprises:
providing a map interface for presentation to the user, wherein the map interface identifies locations where candidate docents are available to participate in interactive tourism sessions.
20. The computer storage medium of claim 17, the operations further comprising:
receiving an input from the user;
determining that the input is an instruction for the selected docent; and
translating the input into one of a standardized set of commands that is understandable by the selected docent.
US14/141,194 2013-03-14 2013-12-26 Providing vicarious tourism sessions Abandoned US20150172607A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/141,194 US20150172607A1 (en) 2013-03-14 2013-12-26 Providing vicarious tourism sessions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361785085P 2013-03-14 2013-03-14
US14/141,194 US20150172607A1 (en) 2013-03-14 2013-12-26 Providing vicarious tourism sessions

Publications (1)

Publication Number Publication Date
US20150172607A1 true US20150172607A1 (en) 2015-06-18

Family

ID=51534399

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/141,194 Abandoned US20150172607A1 (en) 2013-03-14 2013-12-26 Providing vicarious tourism sessions
US14/141,208 Active 2034-07-28 US9661282B2 (en) 2013-03-14 2013-12-26 Providing local expert sessions

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/141,208 Active 2034-07-28 US9661282B2 (en) 2013-03-14 2013-12-26 Providing local expert sessions

Country Status (2)

Country Link
US (2) US20150172607A1 (en)
WO (1) WO2014159133A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9826013B2 (en) 2015-03-19 2017-11-21 Action Streamer, LLC Method and apparatus for an interchangeable wireless media streaming device
US10298699B2 (en) * 2016-09-08 2019-05-21 Microsoft Technology Licensing, Llc Physical location determination of internal network components
US20180300787A1 (en) * 2017-04-18 2018-10-18 Engage, Inc. System and method for synchronous peer-to-peer communication based on relevance

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050086612A1 (en) * 2003-07-25 2005-04-21 David Gettman Graphical user interface for an information display system
US20070067225A1 (en) * 2005-09-21 2007-03-22 Travelocity.Com Lp. Systems, methods, and computer program products for determining rankings of product providers displayed via a product source system
US20080243473A1 (en) * 2007-03-29 2008-10-02 Microsoft Corporation Language translation of visual and audio input
US20080271072A1 (en) * 2007-04-30 2008-10-30 David Rothschild Systems and methods for providing live, remote location experiences
US20080319773A1 (en) * 2007-06-21 2008-12-25 Microsoft Corporation Personalized travel guide
US20080320159A1 (en) * 2007-06-25 2008-12-25 University Of Southern California (For Inventor Michael Naimark) Source-Based Alert When Streaming Media of Live Event on Computer Network is of Current Interest and Related Feedback
US20090019176A1 (en) * 2007-07-13 2009-01-15 Jeff Debrosse Live Video Collection And Distribution System and Method
US20090216577A1 (en) * 2008-02-22 2009-08-27 Killebrew Todd F User-generated Review System
US20100250231A1 (en) * 2009-03-07 2010-09-30 Voice Muffler Corporation Mouthpiece with sound reducer to enhance language translation
US20110029352A1 (en) * 2009-07-31 2011-02-03 Microsoft Corporation Brokering system for location-based tasks
US20120035908A1 (en) * 2010-08-05 2012-02-09 Google Inc. Translating Languages
US20120054058A1 (en) * 2010-07-13 2012-03-01 Bemyeye S.R.L. Method of matching asks and bids of tailored videos
US20130231128A1 (en) * 2012-03-02 2013-09-05 Constantinos Antonios Terzidis System for trading and/or exchanging information about geographical locations
US20140229287A1 (en) * 2011-10-18 2014-08-14 Tour Pal Ltd System and method for providing interactive tour guidance
US20140250180A1 (en) * 2013-03-04 2014-09-04 Erick Tseng Ranking Videos for a User
US20140337173A1 (en) * 2010-06-22 2014-11-13 Nokia Corporation Method and apparatus for managing location-based transactions

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6298308B1 (en) * 1999-05-20 2001-10-02 Reid Asset Management Company Diagnostic network with automated proactive local experts
IL137305A (en) * 2000-07-13 2005-08-31 Clicksoftware Technologies Ld Method and system for sharing knowledge
US7383190B1 (en) * 2000-09-15 2008-06-03 American Express Travel Related Services Company, Inc. Systems, methods and computer program products for receiving and responding to customer requests for travel related information
US20030140037A1 (en) * 2002-01-23 2003-07-24 Kenneth Deh-Lee Dynamic knowledge expert retrieval system
US6894617B2 (en) * 2002-05-04 2005-05-17 Richman Technology Corporation Human guard enhancing multiple site integrated security system
US20040008157A1 (en) * 2002-06-26 2004-01-15 Brubaker Curtis M. Cap-mounted monocular video/audio display
US6753899B2 (en) 2002-09-03 2004-06-22 Audisoft Method and apparatus for telepresence
US20070206086A1 (en) * 2005-01-14 2007-09-06 Experticity, Inc. On-line expert provision system and method
US20070005698A1 (en) * 2005-06-29 2007-01-04 Manish Kumar Method and apparatuses for locating an expert during a collaboration session
US7523082B2 (en) * 2006-05-08 2009-04-21 Aspect Software Inc Escalating online expert help
US20080294694A1 (en) * 2007-05-24 2008-11-27 Videoclix Technologies Inc. Method, apparatus, system, medium, and signals for producing interactive video content
US20080306682A1 (en) * 2007-06-05 2008-12-11 General Motors Corporation System serving a remotely accessible page and method for requesting navigation related information
US20090033736A1 (en) * 2007-08-01 2009-02-05 John Thomason Wireless Video Audio Data Remote System
US8757831B2 (en) * 2007-12-18 2014-06-24 Michael Waters Headgear having an electrical device and power source mounted thereto
US8805844B2 (en) * 2008-08-04 2014-08-12 Liveperson, Inc. Expert search
US8751559B2 (en) 2008-09-16 2014-06-10 Microsoft Corporation Balanced routing of questions to experts
US20100287685A1 (en) * 2009-05-13 2010-11-18 Randy Peterson Universal camera mount for baseball cap
US8266098B2 (en) * 2009-11-18 2012-09-11 International Business Machines Corporation Ranking expert responses and finding experts based on rank
US8832093B2 (en) * 2010-08-18 2014-09-09 Facebook, Inc. Dynamic place visibility in geo-social networking system
US20120077437A1 (en) * 2010-09-27 2012-03-29 Sony Ericsson Mobile Communications Ab Navigation Using a Headset Having an Integrated Sensor
US20120110064A1 (en) * 2010-11-01 2012-05-03 Google Inc. Content sharing interface for sharing content in social networks
US8676937B2 (en) * 2011-05-12 2014-03-18 Jeffrey Alan Rapaport Social-topical adaptive networking (STAN) system allowing for group based contextual transaction offers and acceptances and hot topic watchdogging
US9529910B2 (en) * 2011-07-13 2016-12-27 Jean Alexandera Munemann Systems and methods for an expert-informed information acquisition engine utilizing an adaptive torrent-based heterogeneous network solution
US9110894B2 (en) * 2011-12-16 2015-08-18 Yahooo! Inc. Systems and methods for determining related places
WO2013101438A1 (en) * 2011-12-29 2013-07-04 Kopin Corporation Wireless hands-free computing head mounted video eyewear for local/remote diagnosis and repair
US20130293550A1 (en) * 2012-05-07 2013-11-07 Business Intelligence Solutions Safe B.V. Method and system for zoom animation
US8611930B2 (en) * 2012-05-09 2013-12-17 Apple Inc. Selecting informative presentations based on navigation cues and user intent
US20130339868A1 (en) * 2012-05-30 2013-12-19 Hearts On Fire Company, Llc Social network
US8866878B2 (en) * 2012-10-15 2014-10-21 Bank Of America Corporation Representative pre-selection for customer service video conference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Koncewicz, Minimap Rotation, 03 June 2012, Significant Bits, http://www.significant-bits.com/minimap-rotation *

Also Published As

Publication number Publication date
WO2014159133A1 (en) 2014-10-02
US9661282B2 (en) 2017-05-23
US20140282043A1 (en) 2014-09-18

Similar Documents

Publication Publication Date Title
US11763367B2 (en) System to process data related to user interactions or feedback while user experiences product
US11206373B2 (en) Method and system for providing mixed reality service
KR102423588B1 (en) Information providing method and device
KR102057592B1 (en) Gallery of messages with a shared interest
US10182095B2 (en) Method and system for video call using two-way communication of visual or auditory effect
US20180374268A1 (en) Interactive mixed reality system for a real-world event
EP2880858B1 (en) Using an avatar in a videoconferencing system
US11095947B2 (en) System for sharing user-generated content
CN103813126B (en) It carries out providing the method and its electronic device of user interest information when video calling
JP2020520206A (en) Wearable multimedia device and cloud computing platform with application ecosystem
CN107534784A (en) Server, user terminal apparatus and its control method
TWI594203B (en) Systems, machine readable storage mediums and methods for collaborative media gathering
US11638060B2 (en) Electronic apparatus and control method thereof
CN104077026A (en) Device and method for displaying execution result of application
US20180338164A1 (en) Proxies for live events
US20170193605A1 (en) System and method for insurance claim assessment
US10762902B2 (en) Method and apparatus for synthesizing adaptive data visualizations
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
JP2019537397A (en) Effect sharing method and system for video
US20180268496A1 (en) Photo booth system
US20150172607A1 (en) Providing vicarious tourism sessions
US20160346494A1 (en) Nasal mask with internal structuring for use with ventilation and positive air pressure systems
CN107077660A (en) Accessibility feature in content is shared
CN112041787A (en) Electronic device for outputting response to user input using application and method of operating the same
Tjondronegoro Tools for mobile multimedia programming and development

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MANBER, UDI;REEL/FRAME:032989/0377

Effective date: 20140528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044144/0001

Effective date: 20170929