Lucasfilm Patents

Lucasfilm Ltd. produced the Star Wars and Indiana Jones motion pictures. The company was acquired by the Walt Disney Company in 2012.
Lucasfilm Patents by Type
- Lucasfilm Patents Granted: Lucasfilm patents that have been granted by the United States Patent and Trademark Office (USPTO).
- Lucasfilm Patent Applications: Lucasfilm patent applications that are pending before the United States Patent and Trademark Office (USPTO).
- Patent number: 11170571
  Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
  Type: Grant
  Filed: November 15, 2019
  Date of Patent: November 9, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
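The retrieve-and-merge step in this abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the patented method: the library layout, the nearest-neighbor retrieval, and the averaging merge are all placeholders for whatever Lucasfilm's actual dataset and blending do.

```python
import numpy as np

def retrieve_local_shape(bundle, shape_library):
    """Return the library patch whose stored marker position is nearest
    to the observed 3D bundle position (a stand-in for retrieval)."""
    positions = np.array([entry["marker_pos"] for entry in shape_library])
    nearest = int(np.argmin(np.linalg.norm(positions - bundle, axis=1)))
    return shape_library[nearest]["patch"]

def merge_patches(patches):
    """Merge retrieved local patches by averaging vertices that appear in
    more than one patch; each patch maps vertex index -> 3D position."""
    sums, counts = {}, {}
    for patch in patches:
        for idx, pos in patch.items():
            sums[idx] = sums.get(idx, np.zeros(3)) + np.asarray(pos, float)
            counts[idx] = counts.get(idx, 0) + 1
    return {idx: sums[idx] / counts[idx] for idx in sums}

def reconstruct(bundles, shape_library):
    """One retrieved patch per observed marker bundle, then one merge."""
    return merge_patches(
        [retrieve_local_shape(b, shape_library) for b in bundles])
```

Overlapping vertices between neighboring patches are averaged, which is the simplest way to get a seamless surface out of per-marker local shapes.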
- Publication number: 20210342971
  Abstract: A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.
  Type: Application
  Filed: April 13, 2021
  Publication date: November 4, 2021
  Applicant: Lucasfilm Entertainment Company Ltd.
  Inventors: Douglas G. Watkins, Paige M. Warner, Dacklin R. Young
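The key move in this abstract is transforming the survey origin rather than the point cloud directly: because the point cloud and taking-camera frame are child nodes of that origin, one transform realigns both at once. A minimal sketch, assuming a 4x4 homogeneous transform and an Nx3 point array (both assumptions, not details from the filing):

```python
import numpy as np

def transform_origin(points, transform):
    """Apply a 4x4 homogeneous transform to points expressed in the
    survey coordinate system. Transforming the origin node moves every
    child (point cloud, camera frame) by the same rigid motion."""
    pts = np.asarray(points, float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    return (homo @ np.asarray(transform, float).T)[:, :3]
```

In a real scene graph the transform would be stored on the origin node and composed down the hierarchy; applying it to raw points, as here, shows the same effect.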
- Patent number: 11145125
  Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
  Type: Grant
  Filed: September 13, 2018
  Date of Patent: October 12, 2021
  Assignee: Lucasfilm Entertainment Company Ltd.
  Inventors: Roger Cordes, David Brickhill
- Patent number: 11132837
  Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area made up of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
  Type: Grant
  Filed: November 6, 2019
  Date of Patent: September 28, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Roger Cordes, Nicholas Rasmussen, Kevin Wooley, Rachel Rose
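The in-frustum/out-of-frustum split described here can be illustrated with a toy update loop. This is a hedged sketch only: the cone test below is a crude stand-in for a proper six-plane frustum test, and the panel dictionary layout is invented for the example.

```python
import math

def in_frustum(point, cam_pos, cam_dir, half_fov_rad):
    """True if `point` falls inside a cone approximation of the taking
    camera's frustum; `cam_dir` must be a unit vector."""
    to_p = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(v * v for v in to_p))
    if dist == 0.0:
        return True
    cos_angle = sum(a * b for a, b in zip(to_p, cam_dir)) / dist
    return cos_angle >= math.cos(half_fov_rad)

def update_wall(panels, cam_pos, cam_dir, half_fov_rad, render):
    """Re-render only the LED panels inside the frustum. Panels outside
    keep their previous image, so the light they cast on the performer
    stays stable as the camera moves."""
    for panel in panels:
        if in_frustum(panel["center"], cam_pos, cam_dir, half_fov_rad):
            panel["image"] = render(panel)
```

The point of leaving out-of-frustum panels untouched is exactly the lighting-artifact fix in the abstract: camera-dependent reprojection only needs to be correct where the camera is actually looking.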
- Patent number: 11132838
  Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area made up of multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.
  Type: Grant
  Filed: November 6, 2019
  Date of Patent: September 28, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Roger Cordes, Richard Bluff, Lutz Latta
- Patent number: 11128984
  Abstract: A method may include causing first content to be displayed on a display device; causing second content to be rendered irrespective of a location of a mobile device relative to the display device; and causing the second content to be displayed on the mobile device such that the second content is layered over the first content. When the second content has moved a predetermined distance from the screen, the method may also include causing the second content to be rendered based on the location of the mobile device relative to the display device, and causing the second content to be displayed on the mobile device such that the second content is layered over the first content.
  Type: Grant
  Filed: October 25, 2019
  Date of Patent: September 21, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
- Patent number: 11113885
  Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
  Type: Grant
  Filed: September 13, 2018
  Date of Patent: September 7, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Roger Cordes, David Brickhill
- Patent number: 11107195
  Abstract: An immersive content production system may capture a plurality of images of a physical object in a performance area using a taking camera. The system may determine an orientation and a velocity of the taking camera with respect to the physical object in the performance area. A user may select a first amount of motion blur exhibited by the images of the physical object based on a desired motion effect. The system may determine a correction to apply to a virtual object based at least in part on the orientation and the velocity of the taking camera and the desired motion blur effect. The system may also detect the distances from the taking camera to a physical object and from the taking camera to the virtual display. The system may use these distances to generate a corrected circle of confusion for the virtual images on the display.
  Type: Grant
  Filed: August 21, 2020
  Date of Patent: August 31, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Roger Cordes, Lutz Latta
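The circle-of-confusion correction can be grounded in the standard thin-lens formula: for a lens of focal length f and aperture diameter A focused at distance S1, a subject at S2 blurs into a disc of diameter A · |S2 − S1| / S2 · f / (S1 − f). A sketch under that textbook model (the correction policy, a simple subtraction clamped at zero, is an assumption, not the patented algorithm):

```python
def circle_of_confusion(focal_len, f_number, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter (same length units as
    focal_len, e.g. mm) for a subject at subject_dist when the lens is
    focused at focus_dist."""
    aperture = focal_len / f_number
    return (aperture * abs(subject_dist - focus_dist) / subject_dist
            * focal_len / (focus_dist - focal_len))

def display_blur_correction(focal_len, f_number, focus_dist,
                            virtual_dist, display_dist):
    """Extra blur to pre-render into a virtual object shown on the LED
    wall: the blur it should have at its virtual depth, minus the blur
    the lens already adds at the display's physical distance."""
    want = circle_of_confusion(focal_len, f_number, focus_dist, virtual_dist)
    have = circle_of_confusion(focal_len, f_number, focus_dist, display_dist)
    return max(0.0, want - have)
```

The intuition: the lens physically defocuses the wall itself, so the renderer should only add the difference between that and the depth-correct blur, or distant virtual objects would be blurred twice.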
- Patent number: 11099654
  Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.
  Type: Grant
  Filed: April 17, 2020
  Date of Patent: August 24, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Darby Johnston, Ian Wakelin
- Patent number: 11087738
  Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data.
  Type: Grant
  Filed: June 11, 2019
  Date of Patent: August 10, 2021
  Assignee: Lucasfilm Entertainment Company Ltd. LLC
  Inventors: Scott Levine, Stephen Morris
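The final "derive the M&E mix" step can be pictured as masking out the frames a dialogue classifier flags. This is a toy stand-in only: the real patent describes a trained model operating on extracted metadata and content features, whereas the threshold-and-mask below is an invented simplification for illustration.

```python
import numpy as np

def derive_me_frames(mix_frames, dialogue_prob, threshold=0.5):
    """Attenuate audio frames flagged as dialogue, keeping music-and-
    effects content. `mix_frames` is (num_frames, frame_len) and
    `dialogue_prob` holds per-frame dialogue probabilities from some
    upstream classifier (hypothetical here)."""
    keep = (np.asarray(dialogue_prob) < threshold).astype(float)
    return np.asarray(mix_frames, float) * keep[:, None]
```

A production system would do source separation or reconstruct the M&E stem from the original mix elements rather than hard-zeroing frames, but the mask makes the dialogue-identification -> M&E-derivation flow concrete.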
- Patent number: 11069135
  Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
  Type: Grant
  Filed: November 12, 2019
  Date of Patent: July 20, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
- Patent number: 11049332
  Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
  Type: Grant
  Filed: March 3, 2020
  Date of Patent: June 29, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
- Patent number: 11039083
  Abstract: Embodiments can enable motion capture cameras to be optimally placed in a set. To achieve this, a virtual set can be generated based on information regarding the set. Movement of a virtual actor or a virtual object may be controlled in the virtual set to simulate movement of the corresponding real actor and real object in the set. Based on such movement, camera aspects and obstructions in the set can be determined. Based on this determination, indication information indicating whether regions in the set may be viewable by one or more cameras placed in the physical set may be obtained. Based on the indication information, an optimal placement of the motion capture cameras in the set can be determined. In some embodiments, an interface may be provided to show whether the markers attached to the actor can be captured by the motion capture cameras placed in a specific configuration.
  Type: Grant
  Filed: January 24, 2017
  Date of Patent: June 15, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: John Levin, Mincho Marinov, Brian Cantwell
- Patent number: 11030810
  Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
  Type: Grant
  Filed: September 13, 2018
  Date of Patent: June 8, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Roger Cordes, David Brickhill
- Publication number: 20210150810
  Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
  Type: Application
  Filed: November 15, 2019
  Publication date: May 20, 2021
  Applicant: Lucasfilm Entertainment Company Ltd. LLC
  Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
- Patent number: 10964083
  Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor configured to execute the instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each of the representations includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations.
  Type: Grant
  Filed: April 10, 2019
  Date of Patent: March 30, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan
- Patent number: 10928995
  Abstract: Systems, devices, and methods are disclosed for UV packing. The system includes a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium stores instructions that, when executed, cause the processors to perform a number of operations. One operation is to present a packing map using a graphical user interface including a selection tool. Another operation is to present a first set of one or more target objects using the graphical user interface. Individual ones of the first set include one or more features. One operation is to receive a first user input. Another operation is to, based on the first user input and the one or more features corresponding to the individual ones of the first set, pack the first set into a packing map.
  Type: Grant
  Filed: August 28, 2018
  Date of Patent: February 23, 2021
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Colette Mullenhoff, Benjamin Neall
- Publication number: 20200394999
  Abstract: Implementations of the disclosure describe systems and methods that leverage machine learning to automate the process of creating music and effects mixes from original sound mixes including domestic dialogue. In some implementations, a method includes: receiving a sound mix including human dialogue; extracting metadata from the sound mix, where the extracted metadata categorizes the sound mix; extracting content feature data from the sound mix, the extracted content feature data including an identification of the human dialogue and instances or times the human dialogue occurs within the sound mix; automatically calculating, with a trained model, content feature data of a music and effects (M&E) sound mix using at least the extracted metadata and the extracted content feature data of the sound mix; and deriving the M&E sound mix using at least the calculated content feature data.
  Type: Application
  Filed: June 11, 2019
  Publication date: December 17, 2020
  Applicant: Lucasfilm Entertainment Company Ltd. LLC
  Inventors: Scott Levine, Stephen Morris
- Patent number: 10846920
  Abstract: Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on MR displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of an MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow on a surface in the physical environment, the created shadow corresponding to the virtual object displayed by the MR display. In some implementations, the method includes: loading in a memory of the MR display system, a first version of the scene including the virtual object; and loading in a memory of the projector display system a second version of the scene including a virtual surface onto which the virtual object casts a shadow.
  Type: Grant
  Filed: February 20, 2019
  Date of Patent: November 24, 2020
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Michael Koperwas, Lutz Latta
- Patent number: 10825225
  Abstract: Some implementations of the disclosure are directed to a pipeline that enables real time engines such as gaming engines to leverage high quality simulations generated offline via film grade simulation systems. In one implementation, a method includes: obtaining simulation data and skeletal mesh data of a character, the simulation data and skeletal mesh data including the character in the same rest pose; importing the skeletal mesh data into a real-time rendering engine; and using at least the simulation data and the imported skeletal mesh data to derive from the simulation data a transformed simulation vertex cache that is usable by the real-time rendering engine during runtime to be skinned in place of the rest pose.
  Type: Grant
  Filed: March 20, 2019
  Date of Patent: November 3, 2020
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD. LLC
  Inventors: Ronald Radeztsky, Michael Koperwas
- Patent number: 10812693
  Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
  Type: Grant
  Filed: August 13, 2018
  Date of Patent: October 20, 2020
  Assignee: LucasFilm Entertainment Company Ltd.
  Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
- Patent number: 10796489
  Abstract: An immersive content presentation system can capture the motion or position of a performer in a real-world environment. A game engine can be modified to receive the position or motion of the performer and identify predetermined gestures or positions that can be used to trigger actions in a 3-D virtual environment, such as generating a digital effect, transitioning virtual assets through an animation graph, adding new objects, and so forth. Views of the 3-D environment can be rendered, and composited views can be generated. Information for constructing the composited views can be streamed to numerous display devices in many different physical locations using a customized communication protocol. Multiple real-world performers can interact with virtual objects through the game engine in a shared mixed-reality experience.
  Type: Grant
  Filed: September 13, 2018
  Date of Patent: October 6, 2020
  Assignee: Lucasfilm Entertainment Company Ltd.
  Inventors: Roger Cordes, David Brickhill
- Publication number: 20200288050
  Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
  Type: Application
  Filed: May 20, 2020
  Publication date: September 10, 2020
  Applicant: Lucasfilm Entertainment Company Ltd.
  Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
- Publication number: 20200286284
  Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes receiving a plate with an image of the subject's facial expression, a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters, a model of a camera rig used to capture the plate, and a virtual lighting model that estimates lighting conditions when the image on the plate was captured.
  Type: Application
  Filed: November 12, 2019
  Publication date: September 10, 2020
  Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Stéphane Grabli, Michael Bao, Per Karefelt, Adam Ferrall-Nunge, Jeffery Yost, Ronald Fedkiw, Cary Phillips, Pablo Helman, Leandro Estebecorena
- Publication number: 20200286301
  Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs, the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
  Type: Application
  Filed: March 3, 2020
  Publication date: September 10, 2020
  Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
- Patent number: 10762599
  Abstract: A method is described that includes receiving, from a first device, input used to select a first object in a computer-generated environment. The first device has at least two degrees of freedom with which to control the selection of the first object. The method also includes removing, in response to the selection of the first object, at least two degrees of freedom previously available to a second device used to manipulate a second object in the computer-generated environment. The removed degrees of freedom correspond to the at least two degrees of freedom of the first device and specify an orientation of the second object relative to the selected first object. Additionally, the method includes receiving, from the second device, input including movements within the reduced degrees of freedom used to manipulate a position of the second object while maintaining the specified orientation relative to the selected first object.
  Type: Grant
  Filed: February 28, 2014
  Date of Patent: September 1, 2020
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventor: Steve Sullivan
- Publication number: 20200265638
  Abstract: Implementations of the disclosure are directed to generating shadows in the physical world that correspond to virtual objects displayed on MR displays. In some implementations, a method includes: synchronously presenting a version of a scene on each of an MR display system and a projector display system, where during presentation: the MR display system displays a virtual object overlaid over a view of a physical environment; and a projector of the projector display system creates a shadow on a surface in the physical environment, the created shadow corresponding to the virtual object displayed by the MR display. In some implementations, the method includes: loading in a memory of the MR display system, a first version of the scene including the virtual object; and loading in a memory of the projector display system a second version of the scene including a virtual surface onto which the virtual object casts a shadow.
  Type: Application
  Filed: February 20, 2019
  Publication date: August 20, 2020
  Applicant: Lucasfilm Entertainment Company Ltd. LLC
  Inventors: Michael Koperwas, Lutz Latta
- Publication number: 20200249765
  Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.
  Type: Application
  Filed: April 17, 2020
  Publication date: August 6, 2020
  Applicant: Lucasfilm Entertainment Company Ltd.
  Inventors: Darby Johnston, Ian Wakelin
- Patent number: 10732797
  Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
  Type: Grant
  Filed: October 11, 2017
  Date of Patent: August 4, 2020
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
- Patent number: 10701253
  Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.
  Type: Grant
  Filed: August 13, 2018
  Date of Patent: June 30, 2020
  Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
  Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
-
Patent number: 10692288Abstract: A method may include capturing a first image of a physical environment using a mobile device. The mobile device may include a physical camera and a display. The method may also include receiving a second image from a content provider system. The second image may be generated by the content provider system by rendering a view from a virtual camera in a virtual environment. The virtual environment may represent at least a portion of the physical environment. A location of the virtual camera in the virtual environment may correspond to a location of the physical camera in the physical environment. The second image may include a view of a computer-generated object. The method may additionally include generating a third image by compositing the first image and the second image, and causing the third image to be displayed on the display of the mobile device.Type: GrantFiled: June 27, 2017Date of Patent: June 23, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Nicholas Rasmussen, Michael Koperwas, Earle M. Alexander, IV
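The final step of this abstract, generating a third image by compositing the live camera image with the rendered CG image, can be sketched as a standard "over" composite. The function name and array conventions are illustrative assumptions, not from the patent.

```python
import numpy as np

def composite_ar_frame(camera_image, rendered_image, alpha_matte):
    """Layer a rendered CG image over a live camera image.

    camera_image, rendered_image: float arrays in [0, 1], shape (H, W, 3).
    alpha_matte: per-pixel CG coverage in [0, 1], shape (H, W, 1).
    Standard "over" compositing: out = cg * a + live * (1 - a).
    """
    return rendered_image * alpha_matte + camera_image * (1.0 - alpha_matte)

# Toy 1x2 frame: left pixel fully CG, right pixel fully live camera.
live = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])  # red camera image
cg = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])    # blue rendered object
alpha = np.array([[[1.0], [0.0]]])
out = composite_ar_frame(live, cg, alpha)
print(out[0, 0], out[0, 1])  # CG pixel, then camera pixel
```

Because the virtual camera's pose mirrors the physical camera's pose, the rendered layer lines up with the live image before this composite is taken.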
-
Publication number: 20200145644Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area that includes multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.Type: ApplicationFiled: November 6, 2019Publication date: May 7, 2020Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Roger CORDES, Nicholas RASMUSSEN, Kevin WOOLEY, Rachel ROSE
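The frustum-gated update described above, re-rendering only the portion of the LED wall the taking camera can see, can be sketched with a one-dimensional toy model. Panel angles, the flat angle test (which ignores wraparound), and all names are simplifying assumptions, not the patent's method.

```python
def update_wall(panel_angles, camera_yaw, camera_fov, wall_state, new_frame_id):
    """Update only the LED panels inside the taking camera's frustum.

    panel_angles: horizontal angle (degrees) of each panel's center as
    seen from the stage origin. wall_state: frame id currently shown on
    each panel. Panels inside the camera's horizontal field of view get
    the camera-tracked render; the rest keep their previous content.
    """
    half = camera_fov / 2.0
    for i, ang in enumerate(panel_angles):
        if abs(ang - camera_yaw) <= half:  # panel falls inside the frustum
            wall_state[i] = new_frame_id   # re-render for the new camera pose
    return wall_state

panels = [-60, -30, 0, 30, 60]  # five panels across the wall
state = ["static"] * 5
state = update_wall(panels, camera_yaw=0, camera_fov=70,
                    wall_state=state, new_frame_id="tracked")
print(state)  # only panels within ±35° of the camera axis update
```

Keeping out-of-frustum panels static preserves stable ambient lighting on the performer while the in-frustum region tracks the camera for correct parallax.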
-
Publication number: 20200143592Abstract: An immersive content presentation system and techniques that can detect and correct lighting artifacts caused by movements of one or more taking cameras in a performance area that includes multiple displays (e.g., LED or LCD displays). The techniques include capturing, with a camera, a plurality of images of a performer performing in a performance area at least partially surrounded by one or more displays presenting images of a virtual environment. Images of the virtual environment within a frustum of the camera are updated on the one or more displays based on movement of the camera, while images of the virtual environment outside of the frustum of the camera are not updated based on movement of the camera. The techniques further include generating content based on the plurality of captured images.Type: ApplicationFiled: November 6, 2019Publication date: May 7, 2020Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD. LLCInventors: Roger CORDES, Richard BLUFF, Lutz LATTA
-
Patent number: 10627908Abstract: A system and method for controlling a view of a virtual reality (VR) environment via a computing device with a touch sensitive surface are disclosed. In some examples, a user may be enabled to augment the view of the VR environment by providing finger gestures to the touch sensitive surface. In one example, the user is enabled to call up a menu in the view of the VR environment. In one example, the user is enabled to switch the view of the VR environment displayed on a device associated with another user to a new location within the VR environment. In some examples, the user may be enabled to use the computing device to control a virtual camera within the VR environment and have various information regarding one or more aspects of the virtual camera displayed in the view of the VR environment presented to the user.Type: GrantFiled: September 30, 2015Date of Patent: April 21, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Darby Johnston, Ian Wakelin
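The gesture-driven controls in this abstract amount to a dispatch from recognized touch gestures to VR actions. The specific gestures and action names below are hypothetical placeholders, not taken from the patent.

```python
# Illustrative mapping from touch gestures on the companion device to
# actions in the VR view (gesture and action names are assumptions).
GESTURES = {
    "two_finger_tap": "open_menu",
    "swipe_up": "show_camera_info",
    "double_tap": "relocate_other_user_view",
}

def handle_gesture(gesture):
    """Resolve a recognized touch gesture to a VR-environment action."""
    return GESTURES.get(gesture, "ignore")

print(handle_gesture("two_finger_tap"))  # open_menu
print(handle_gesture("pinch"))           # ignore
```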
-
Patent number: 10600245Abstract: Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode during any point of the movie, allowing a viewer to interact with various aspects of the movie. The movie may be displayed using different formats and resolutions depending on which mode the movie is being presented in.Type: GrantFiled: May 28, 2015Date of Patent: March 24, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Lutz Markus Latta, Ian Wakelin, Darby Johnston, Andrew Grant, John Gaeta
-
Patent number: 10602200Abstract: Systems and techniques are provided for switching between different modes of a media content item. A media content item may include a movie that has different modes, such as a cinematic mode and an interactive mode. For example, a movie may be presented in a cinematic mode that does not allow certain user interactions with the movie. The movie may be switched to an interactive mode during any point of the movie, allowing a viewer to interact with various aspects of the movie. The movie may be displayed using different formats and resolutions depending on which mode the movie is being presented in.Type: GrantFiled: March 31, 2016Date of Patent: March 24, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Andrew Grant, Lutz Markus Latta, Ian Wakelin, Darby Johnston, John Gaeta
-
Patent number: 10594786Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.Type: GrantFiled: October 11, 2017Date of Patent: March 17, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 10553036Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.Type: GrantFiled: October 11, 2017Date of Patent: February 4, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 10489958Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor configured to execute the instructions to perform a method that includes receiving multiple representations of an object. Each of the representations includes position information of the object and corresponds to an instance in time. For at least one of the representations, the method includes defining a contour that represents a movable silhouette of a surface feature of the object. The method also includes producing a deformable model of the surface of the object from the defined contour and from the at least one representation of the object.Type: GrantFiled: August 1, 2017Date of Patent: November 26, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Ronald Mallet, Yuting Ye, Michael Koperwas, Adrian R. Goldenthal, Kiran S. Bhat
-
Patent number: 10484824Abstract: A method may include causing first content to be displayed on a display device. The method may also include determining a location of a mobile device relative to the display device. In some embodiments, the mobile device may be positioned such that the first content is visible to a viewer of the mobile device. The method may additionally include causing second content to be displayed on the mobile device such that the second content is layered over the first content.Type: GrantFiled: June 27, 2016Date of Patent: November 19, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
-
Patent number: 10423234Abstract: A system and method facilitating a user to manipulate a virtual reality (VR) environment are disclosed. The user may provide an input via a touch sensitive surface of a computing device associated with the user to bind a virtual object in the VR environment to the computing device. The user may then move and/or rotate the computing device to cause the bound virtual object to move and/or rotate in the VR environment accordingly. In some examples, the bound virtual object may cast a ray into the VR environment. The movement and/or rotation of the virtual object controlled by the computing device in those examples can change the direction of the ray. In some examples, the virtual object may include a virtual camera. In those examples, the user may move and/or rotate the virtual camera in the VR environment by moving and/or rotating the computing device.Type: GrantFiled: September 30, 2015Date of Patent: September 24, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Darby Johnston, Ian Wakelin
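The ray-casting behavior above, where rotating the handheld device reorients the bound object and therefore the ray it casts, can be sketched as a yaw/pitch-to-direction conversion. The coordinate convention (camera looking down +z, y up) and function name are assumptions for illustration.

```python
import math

def device_ray(yaw_deg, pitch_deg):
    """Direction of the ray cast by a virtual object bound to the device.

    Rotating the handheld device updates the bound object's orientation,
    which in turn redirects the ray it casts into the VR scene.
    Yaw/pitch in degrees; returns a unit direction vector (x, y, z).
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

print(device_ray(0, 0))   # device at rest: ray points straight ahead, (0, 0, 1)
print(device_ray(90, 0))  # device yawed 90 degrees: ray points along +x
```

A full implementation would use the device's quaternion orientation rather than Euler angles, but the binding idea is the same: device pose in, ray direction out.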
-
Patent number: 10403019Abstract: A multi-channel tracking pattern is provided along with techniques and systems for performing motion capture using the multi-channel tracking pattern. The multi-channel tracking pattern includes a plurality of shapes having different colors on different portions of the pattern. The portions with the unique shapes and colors allow a motion capture system to track motion of an object bearing the pattern across a plurality of video frames.Type: GrantFiled: February 11, 2016Date of Patent: September 3, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANYInventor: John Levin
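The key property of the multi-channel pattern above is that each shape/color combination is unique, so a single detection identifies which portion of the pattern it came from. A minimal sketch, with an invented pattern layout (the shapes, colors, and region names are not from the patent):

```python
# Each (shape, color) pair is unique across the pattern, so one detection
# pins down which portion of the tracked object it came from and can be
# followed across video frames. (Layout below is illustrative only.)
PATTERN = {
    ("triangle", "red"):   "left_shoulder",
    ("triangle", "green"): "right_shoulder",
    ("circle",   "red"):   "chest",
    ("circle",   "green"): "back",
}

def identify_region(shape, color):
    """Map a detected shape/color pair to its portion of the pattern."""
    return PATTERN.get((shape, color), "unknown")

print(identify_region("circle", "red"))   # chest
print(identify_region("square", "blue"))  # unknown
```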
-
Patent number: 10373342Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selections devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to created storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.Type: GrantFiled: October 11, 2017Date of Patent: August 6, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 10321117Abstract: A method of generating unrecorded camera views may include receiving a plurality of 2-D video sequences of a subject in a real 3-D space, where each 2-D video sequence may depict the subject from a different perspective. The method may also include generating a 3-D representation of the subject in a virtual 3-D space, where a geometry and texture of the 3-D representation may be generated based on the 2-D video sequences, and the motion of the 3-D representation in the virtual 3-D space is based on motion of the subject in the real 3-D space. The method may additionally include generating a 2-D video sequence of the motion of the 3-D representation using a virtual camera in the virtual 3-D space where the perspective of the virtual camera may be different than the perspectives of the plurality of 2-D video sequences.Type: GrantFiled: August 25, 2014Date of Patent: June 11, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Hilmar Koch, Ronald Mallet, Kim Libreri, Paige Warner, Mike Sanders, John Gaeta
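The last step of this abstract, filming the reconstructed 3-D performance with a virtual camera to get a view no real camera recorded, rests on projecting the 3-D geometry into the virtual camera's image plane. A simplified pinhole-projection sketch (camera at the origin looking down +z; the model and names are assumptions, not the patent's method):

```python
def project_point(point, focal_length):
    """Pinhole projection of a 3-D point into the virtual camera's image.

    point: (x, y, z) in camera coordinates, z > 0 in front of the camera.
    focal_length: in pixel units. Returns 2-D image coordinates (u, v).
    """
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# A point 4 units in front of the camera, offset 1 right and 2 up.
print(project_point((1.0, 2.0, 4.0), focal_length=100))  # (25.0, 50.0)
```

Moving the virtual camera just changes which camera-space coordinates each scene point maps to before this projection, which is what makes arbitrary unrecorded perspectives possible.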
-
Publication number: 20190124244Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.Type: ApplicationFiled: August 13, 2018Publication date: April 25, 2019Applicant: Lucasfilm Entertainment Company Ltd.Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
-
Publication number: 20190122374Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.Type: ApplicationFiled: August 13, 2018Publication date: April 25, 2019Applicant: Lucasfilm Entertainment Company Ltd.Inventors: Leandro Estebecorena, John Knoll, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
-
Patent number: 10269165Abstract: A system includes a computing device that includes a memory and a processor configured to execute instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each representation includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations.Type: GrantFiled: January 30, 2012Date of Patent: April 23, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan
-
Patent number: 10269169Abstract: In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures has sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures.Type: GrantFiled: August 9, 2016Date of Patent: April 23, 2019Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Steve Sullivan, Colin Davidson, Michael Sanders, Kevin Wooley
-
Patent number: 10147219Abstract: Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input data including position information defining positions on a face of the subject. The method may further include obtaining an animation model for the subject that includes adjustable controls that control the animation model to define facial features of the computer-generated representation of the face, and matching one or more of the positions on the face with one or more corresponding positions on the animation model.Type: GrantFiled: February 3, 2017Date of Patent: December 4, 2018Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: Kiran Bhat, Michael Koperwas, Jeffery Yost, Ji Hun Yu, Sheila Santos
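The matching step above, finding control values so the animation model's positions agree with captured positions on the performer's face, can be sketched as a least-squares solve over a linear blendshape-style model. The linear model and all names here are an illustrative stand-in, not the patent's actual animation model.

```python
import numpy as np

def solve_controls(neutral, basis, observed):
    """Find control values that make the animation model match a pose.

    Illustrative linear model: positions = neutral + basis @ weights.
    Solves for the control weights in a least-squares sense from the
    observed marker positions on the performer's face.
    """
    weights, *_ = np.linalg.lstsq(basis, observed - neutral, rcond=None)
    return weights

neutral = np.array([0.0, 0.0, 0.0])     # rest positions of 3 face markers
basis = np.array([[1.0, 0.0],           # control 1 moves marker 1
                  [0.0, 1.0],           # control 2 moves marker 2
                  [1.0, 1.0]])          # both controls move marker 3
observed = np.array([0.5, 0.25, 0.75])  # captured marker positions
print(solve_controls(neutral, basis, observed))  # ~[0.5, 0.25]
```

Real performance-capture rigs solve a nonlinear version of this with many more markers and controls, but the fitting objective is of this shape: choose control values minimizing the mismatch between model positions and captured positions.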
-
Patent number: D910738Type: GrantFiled: April 2, 2019Date of Patent: February 16, 2021Assignee: Lucasfilm Entertainment Company Ltd. LLCInventors: John M. Levin, Leandro F. Estebecorena, Paige Warner