Invented by Carlos G. Perez, Vidya Srinivasan, Colton B. MARSHALL, Aniket HANDA, Harold Anthony MARTINEZ MOLINA, Microsoft Technology Licensing LLC
The gaming industry has been one of the early adopters of this technology, using it to create personalized gaming experiences. For example, a game can be designed to adapt to the user's skill level, preferences, and playing style. The technology has also been used to build virtual reality games that offer a more immersive experience for the user.
The education industry has been similarly quick to adopt the technology, applying it to personalized learning. For example, a student can be placed in a virtual environment that adapts to their learning style and pace, and virtual classrooms built on the technology provide a more interactive and engaging learning experience.
The healthcare industry has also been exploring the technology to create personalized care experiences. For example, a patient can be provided with a virtual environment that adapts to their medical condition and needs, and virtual therapy sessions can offer a more comfortable and convenient experience for the patient.
The retail industry has likewise been exploring the technology to personalize shopping. For example, a customer can be presented with a virtual environment that adapts to their preferences and needs, and virtual showrooms provide a more immersive and interactive shopping experience.
In conclusion, the market for the system and method for displaying an object in a virtual or semi-virtual environment based on user characteristics is growing rapidly. The technology has been adopted across industries such as gaming, education, healthcare, and retail, giving users personalized and immersive experiences. As it continues to evolve, we can expect more industries to adopt it and offer users even more personalized experiences.
The Microsoft Technology Licensing LLC invention works as follows:
A method, system, and computer program for providing a virtual object in a virtual or semi-virtual environment based on a characteristic associated with the user. In one embodiment, the system includes at least one processor and a memory storing instructions which, when executed by the at least one processor, perform a set of operations including determining the characteristic of the user within the virtual or semi-virtual environment relative to a predetermined location in the environment and providing a virtual object based on that characteristic.
Background for System and method for displaying an object in a virtual or semi-virtual environment based on user characteristics
Publishing websites have been an important way to share and consume information online. A few services make website creation more accessible, but none solve the problem of building sites that fully utilize 3D content. There is thus an increasing need for tools and services that facilitate the creation and consumption of 3D content. Moving around a virtual environment with a virtual reality headset can be difficult, and users may not know how to use or interact with a virtual world. Automatically moving the user around the virtual world is also difficult and can cause motion sickness or discomfort.
A 3D experience offers a greater level of immersion than a 2D experience. In virtual or semi-virtual realms such as virtual reality, augmented reality, and mixed reality, 3D objects can move, animate, and change form. In the past, users needed to carefully design all of the possible states and behaviors associated with 3D objects. Most users lack the expertise required to create correct 3D views and interfaces and to operate them, and often lack the ability to control objects in these worlds at all. This is especially true when the user is displaced from the object's original location.
The embodiments described here arise from these and other general considerations. Although relatively specific problems have been discussed, the embodiments are not limited to solving the problems identified in the background.
The present application generally describes virtual or semi-virtual systems and, more specifically, describes a virtual object that can be provided in a virtual or semi-virtual environment based on a characteristic associated with an individual user.
In one embodiment, the system comprises at least one computer processor and a memory storing instructions that, when executed by the at least one computer processor, perform a set of operations comprising determining a characteristic associated with a user in a virtual or semi-virtual environment, with respect to a predetermined reference location within the environment, and providing a corresponding virtual object based on that characteristic.
In one embodiment, the system includes a head-mounted display (HMD), and the at least one processor is connected to the display. The providing then involves displaying the virtual object on the HMD's display.
In another aspect of the invention, the characteristic is the distance from the predetermined reference location, and the providing comprises providing the virtual object in a first predetermined form corresponding to that distance. The virtual object can be a representation of an item at the reference location and can include, for instance, a user interface that allows the user to control the object regardless of their location. In one embodiment, the user may be located at a locomotion marker, from which the user's view can be transported to the view associated with that marker.
In a further aspect of this invention, the distance is a first distance, and the set of operations also includes determining a second distance, greater than the first, from the predetermined reference location, and providing the virtual object to the user in a second predetermined form.
The second predetermined form of the virtual object can be provided by determining which features of the virtual object are most important and highlighting them prominently.
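As an illustration of the first/second form idea above, here is a minimal sketch of choosing a predetermined form by distance. All thresholds, form names, and fields are hypothetical, not taken from the patent:

```python
import math

def select_object_form(user_pos, reference_pos, near_threshold=2.0):
    """Pick a predetermined form for a virtual object based on the user's
    distance from the reference location (threshold is illustrative)."""
    distance = math.dist(user_pos, reference_pos)
    if distance <= near_threshold:
        # First predetermined form: full detail when the user is close.
        return {"form": "detailed", "highlight_key_features": False}
    # Second predetermined form: emphasize only the most important features.
    return {"form": "simplified", "highlight_key_features": True}
```

In a real system the returned form descriptor would drive the renderer; here it simply records which of the two predetermined forms applies.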
According to an example aspect of the present invention, the virtual or semi-virtual environment can be a virtual reality, augmented reality, or mixed reality environment, and the user may be either a virtual or a non-virtual user. The virtual object can be a representation of the object at the reference location. It could be, for example, a virtual interface that allows the user to control a media player regardless of their location; in one example, the virtual object could be a floating virtual interface.
This summary is intended to provide a simplified introduction to a number of concepts that are described in greater detail in the detailed description. It does not aim to identify the key features or essential elements of the claimed subject matter, nor to limit its scope.
The detailed description that follows refers to the accompanying illustrations, which form part of this document and in which specific embodiments or examples are shown. The aspects can be combined or used in other ways, and structural modifications may be made, without departing from the disclosure. Embodiments can be implemented as devices, methods, or systems; accordingly, they can take the form of a software or hardware implementation. The detailed description is therefore not to be taken as limiting, and the scope is determined by the claims and their equivalents.
The present technology is concerned with providing an object (such as a virtual item) in a space such as, for instance, a 3D environment, based on one or more characteristics associated with the user. The 3D environment can be a virtual space (VR), a semi-virtual one (augmented reality, mixed reality, etc.), or even a non-virtual space. In one example, the virtual object may be a 3D object, although it can also be a 2D object.
In greater detail, the present technology involves generating, manipulating, or controlling a virtual item, such as a virtual user interface, sign, billboard, character, text, symbol, shape, or other virtual object, based on at least one characteristic associated with the user. The characteristics can include, for example, the user's position, orientation, and location, as well as the direction of their gaze or line of sight, their field of vision, angle, or distance, in relation to a predetermined reference point. The virtual object itself can serve as the predetermined reference point, or the reference point could be a non-virtual (or semi-virtual) object that represents the virtual object within the environment.
As non-limiting examples, a virtual item, or a component of a digital object such as a user interface, can be designed to have features, structures, and/or sizes that depend on the user's distance from a reference location. For example, as the user's distance from a predetermined reference point increases, the size of the virtual object may increase so that the user can still perceive and interact with it; the size can then decrease (e.g., toward a standard size) as the user gets closer to the location. The size and number of interface components may also change dynamically depending on the displayed size of the virtual object. This allows the user to perceive the object clearly regardless of his or her current location: the object remains recognizable and usable, and the user can interact with it without having to move or be transported.
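The distance-dependent sizing described above can be sketched as follows. The linear scaling rule, parameter names, and growth cap are assumptions for illustration, not the patent's actual method:

```python
import math

def scaled_object_size(base_size, user_pos, reference_pos,
                       reference_distance=1.0, max_scale=4.0):
    """Scale a virtual object's apparent size with the user's distance from
    the reference location, so it stays perceivable from far away.
    Parameter names and values are illustrative only."""
    distance = math.dist(user_pos, reference_pos)
    # Grow linearly with distance, converging back to base_size up close,
    # and cap the growth so the object never dominates the view.
    scale = min(max(distance / reference_distance, 1.0), max_scale)
    return base_size * scale
```

Up close the object stays at its standard size; at twice the reference distance it is drawn twice as large, up to the cap.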
Of course, the above illustration is only illustrative and does not limit the scope of the technology described in this document. In other examples, depending on the applicable design criteria, an object's size can instead be reduced as the user moves away from the predetermined reference location, or increased as the user gets closer.
In another example, certain parts of an object can be generated, manipulated, or controlled based on the characteristics of the user. Parts of the object such as text, buttons, segments, lines, icons, data or information, and boxes can be altered, manipulated, or controlled based on certain characteristics associated with the user. A part's size can be altered in one or more directions, and it can be made to appear more or less prominent, moved, repositioned, or blurred. The user thus remains able to interact with the parts of the object they deem important, regardless of their location or distance. The object can be updated in real time as the characteristics of the user change, at predetermined intervals in time or space, and/or based on predetermined variations of user characteristics. For example, the object may be controlled or manipulated to a certain degree when the user, or the user's line of sight or body part, is displaced linearly or angularly from a reference location. Other embodiments provide the object when the user is at locomotion markers in the applicable environment. The manner in which the providing occurs may depend on the location and/or distance from the predetermined reference point.
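The idea of emphasizing or hiding individual parts of an object by distance might look like the following sketch. The `important`/`visible` fields and the threshold are hypothetical, chosen only to illustrate the behavior:

```python
def adjust_interface_parts(parts, distance, far_threshold=3.0):
    """When the user is far from the reference location, hide minor parts
    of the interface so the important ones stay prominent; restore the
    full interface up close. Fields and threshold are illustrative."""
    adjusted = []
    for part in parts:
        part = dict(part)  # copy so the caller's list is not mutated
        if distance > far_threshold and not part.get("important", False):
            part["visible"] = False  # de-emphasize minor detail at range
        else:
            part["visible"] = True
        adjusted.append(part)
    return adjusted
```

A renderer would call this each frame (or at predetermined intervals) as the user's distance changes.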
Before describing in greater detail the way in which virtual objects can be provided, it should be noted that an example aspect herein could involve a user wearing an HMD that allows the user to view a virtual world or environment. The user may want to interact with objects while viewing the environment through the head-mounted display (HMD). The present technology can display locomotion markers for the user to select; once the user has selected a locomotion marker, their view can be transferred to the view associated with that marker. For example, the user may select a locomotion marker to view an object in a virtual or semi-virtual space from a specific position and orientation. A locomotion marker can also be linked to content: when 3D objects are created or modified within an environment, a locomotion marker can be associated with that content, placing the user in the optimal or desired position and orientation for viewing the 3D object. In such examples, when the user's gaze is focused on or near a 3D object, a locomotion marker may be displayed that is properly oriented to view it. The displayed locomotion marker can be chosen to teleport the user to the best position and orientation for viewing the 3D objects within the environment.
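A toy sketch of transferring the user's view to a selected locomotion marker follows. The camera and marker dictionaries are an assumed representation, not the patent's data model:

```python
def teleport_to_marker(camera, marker):
    """Move the user's view (the virtual camera) to the position and
    orientation associated with a selected locomotion marker.
    The dictionary representation here is illustrative only."""
    camera["position"] = tuple(marker["position"])
    camera["orientation"] = tuple(marker["orientation"])  # Euler angles
    return camera

# Example: the user selects a marker placed for viewing an object.
camera = {"position": (0.0, 1.7, 0.0), "orientation": (0.0, 0.0, 0.0)}
marker = {"position": (3.0, 1.7, -2.0), "orientation": (0.0, 90.0, 0.0)}
teleport_to_marker(camera, marker)
```

Snapping the camera directly to the marker's pose (rather than animating it there) is consistent with the motion-sickness concern noted earlier: instantaneous teleportation avoids artificial continuous movement.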
The locomotion marker can be selected by a handheld control unit, a smartphone, other controls that are operatively linked to the HMD, or based on the gaze or view of the person using the HMD. The locomotion markers can also be selected using any method known to those with skill in the field.
Those with skill in the art will understand that in a virtual or semi-virtual world, the user's view corresponds to the position and orientation of the virtual camera within the environment. The user's view of the world changes when the virtual camera is repositioned or reoriented in the environment. When an HMD is used to view the environment, the virtual camera's orientation is usually tied to the direction of the HMD worn by the user.
The virtual camera's orientation in the virtual environment determines the orientation of the user's view or gaze, and the HMD's position in the real world controls the virtual camera. The virtual camera's orientation can be determined by a global coordinate system for the virtual world; the virtual world could, for example, use a 3D Cartesian coordinate system with a predefined origin. The virtual camera can be considered an object in the virtual world whose orientation is defined by Euler angles relative to the global coordinate system. Those skilled in the art will also appreciate that the technology herein allows for different techniques to control or represent the rotation of a digital object, including the use of rotation matrices, quaternions, or other techniques.
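As one example of the relationship between these rotation representations, an orientation given as Euler angles can be converted to a quaternion. The sketch below assumes an intrinsic yaw-pitch-roll (Z-Y-X) convention, which is one common choice rather than anything specified by the patent:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert Euler angles (radians, intrinsic Z-Y-X yaw/pitch/roll
    convention, an illustrative choice) to a unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)
```

Quaternions avoid the gimbal-lock ambiguity of Euler angles, which is one reason engines offer both representations for camera orientation.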