RDL Major Project 17 [RMP17]

ADDITION TO RMP17: 24TH FEB 2013

RDL shall expand coverage of RMP17 users to enable them to access and use Apps they have programmed &/or have access to via the Internet (as described in Google AR). In addition, developments of Apps shall also be intended & designed to help users with degrees of limitation of eyesight, including but not limited to [ibnlt] substituting the camera (included in glasses worn by users) to supplement limited eyesight: to interpret the camera's view (ibnlt microscopes & telescopes, plus adaptation to low & bright light) & convert the information to be read & understood via the user's ears. For example: words & sentences read by the camera & App shall be read out to be heard by the user. The information visually read by the camera needs also to carry the direction(s), size(s), color(s) & other cues of meaning, converted to sound, which the user can learn & use effectively. The RMP17 user may also be able to choose how the information is supplied, so as to understand & use it efficiently under the circumstances (such as to avoid danger, or to learn & practice his/her role).
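The cue-to-sound conversion described above can be sketched as follows. This is a minimal illustration, not an RDL implementation: the `TextCue` fields and the direction-to-pan, size-to-volume, color-to-spoken-prefix mappings are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class TextCue:
    text: str            # words read by the camera's OCR
    bearing_deg: float   # direction of the text relative to the user (-90 left .. +90 right)
    height_px: int       # apparent size of the text in the camera image
    color: str           # dominant color of the text

def cue_to_audio(cue: TextCue) -> dict:
    """Map visual cues to audio parameters the earpiece could render.

    Direction -> stereo pan, size -> volume, color -> a spoken prefix.
    All mappings here are illustrative assumptions.
    """
    pan = max(-1.0, min(1.0, cue.bearing_deg / 90.0))  # -1 = full left, +1 = full right
    volume = min(1.0, cue.height_px / 200.0)           # larger text reads louder
    return {
        "speech": f"{cue.color} text: {cue.text}",
        "pan": pan,
        "volume": volume,
    }
```

A sign reading "EXIT" seen 45 degrees to the user's right would thus be spoken half-panned to the right ear, letting the user learn direction from sound, as the paragraph above proposes.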

Longer term, these additions shall be planned to help humans [HSS] collaborate with Experimental Robots [HR], in view of changes in limitations of human capability caused by age &/or injury, & differences among HR of different types.

4D Vision, Surround Sound & Augmented Reality 

Metaio’s augmented reality software capability, added here, is useful to non-developers in connection with RDL's Major Project 16 and this Major Project 17:

"If you are not familiar with augmented reality (AR), the technology superimposes images and information over your view of the world. For example, if you wanted to learn more about a landmark in front of you, an AR application might allow you to look through your phone’s camera and see information about its history, dimensions, or geographic coordinates. Kind of like creating an MTV pop-up video for the real world."

Summary of Upgrade to RDL Major Project 17: This project may wait for Major Projects 12 through 16 to be available in advance, or simply include all of them as part of this, plus the following additions and upgrades:

  • All before this shall begin with (Experimental) RDL Major Project 17, which includes design, patenting and perfecting of wireless equipment to equip people (homo sapiens sapiens [HSS]) and some other mammals to:
  • act, dance, compose for and play musical instruments (also specially designed by RDL)
  • dictate and/or recite
  • dialogues in specific languages and dialects, in children's, teenage and adult men's and women's voices, and animal sounds, independently (for practice & perfection):

All of which can be recorded, remixed and/or transmitted locally or via the Internet, & then used to intercommunicate, &/or modified, remixed & recombined for entertainment, learning, teaching or collaborating among consenting people participating in real, enacted or re-enacted (including but not limited to [ibnlt]):

  • work
  • activities
  • play
  • concerts
  • operas
  • dances,
  • church meetings
  • vacation expeditions
  • shopping expeditions &/or
  • training for/&
  • combat (as a few, ibnlt zero, vs. many).

 

The required Augmented equipment will include:     
  • (wireless) glasses with
    1. AR cameras
    2. microphones and
    3. intercommunication senders & 
    4. receivers listening for sound & for the vocal sounds of the user, nearby people & animals, which can be controlled by the user (wearer) to intercommunicate with other people and animals equipped with the same type of AR glasses, &/or be transmitted via the Internet for recording & processing: to record visual scenes, actions & dictation, and to translate voice(s) to dialogue for printing or script writing by AR users, ibnlt interconnected people, or all included.
  • A wireless video camera
    1. including a cell phone carried (concealed or not) by the user, with
    2. wireless connections to an invisible ear pad in the user's ear, impossible for others to hear,
    3. microphones,
    4. intercommunication senders & 
    5. receivers, which can be used to record the surroundings, voice comments on, or a written description of, the scene and/or events (with the user as the narrator, the star, or supporting star{s}) of that scene, by the user's voice, bystanders' voice{s} (or, ASAP, by thought); re-playable & modifiable by the carrier's voice, an editor's voice or thought, or absolutely impossible to edit, by anybody, ever (for eye-witnessed evidence, better than human memory, for court, or for true faction rather than {"true" story} Fiction).
    • This (above) wireless-connection unit is also the controller used to control all interconnectable equipment (within range or accessible via the Internet) including, but not limited to:

    1. computers
    2. telephones
    3. dictation machines
    4. vehicles
    5. television sets
    6. radios and
    7. weapons,

all assigned to, and under the exclusive control of, the designated user, by the user's voice and/or by the user's thought. See Experimental HAL.
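A minimal sketch of the exclusive-control scheme above, assuming a simple registry of device handlers and a single designated user. The class and method names are illustrative; voice and thought recognition themselves are out of scope here, so the user's identity arrives as an already-authenticated string.

```python
class DeviceController:
    """Controller granting one designated user exclusive control of
    interconnectable devices (computers, telephones, radios, etc.).
    An illustrative sketch, not an RDL design."""

    def __init__(self, designated_user: str):
        self.designated_user = designated_user
        self.devices = {}  # device name -> callable handler

    def register(self, name, handler):
        """Register a device handler; handler takes an action string."""
        self.devices[name] = handler

    def command(self, user: str, device: str, action: str):
        """Run an action on a device, only for the designated user."""
        if user != self.designated_user:
            raise PermissionError(f"{user} is not the designated user")
        if device not in self.devices:
            raise KeyError(f"unknown device: {device}")
        return self.devices[device](action)
```

For example, registering a "radio" handler lets the designated user issue `command("owner", "radio", "on")`, while any other caller is refused with a `PermissionError`, matching the "exclusive control" requirement above.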

____________________________________________________________________
                     Previous specification of RDL Major Project 17

Design, develop & build Version III of voice-controlled glasses, earphones & support, to be worn by individuals, especially those with (or without) disabilities, & adapted to manage quick reorientation of the user's attention:
  • Version III in or near any (selected) GSDS equipped room (or completely independent of Project 13) & with, in any event: augmented reality [AR],  combining user's (selected by user, from among the selectable) view(s) with user's viewable  real environment.
In the case of Versions I and II, the glasses display a wrap-around view, as though the user were in a room with a large window wall and three walls, bearing one framed mirror and several framed pictures. The user's "apparent" room is as if the three walls were the same as the one GSDS room he has selected to view (and/or interact with), and the fourth wall is a window, divided to show several other GSDS rooms he has selected to view (and/or whose occupants he may converse with).
 
In Version III, the effect is similar, but provides users with additional choices:
  1. three walls represent the user's real environment, and the other GSDS rooms are shown, one at a time, on the fourth wall, or
  2. all selected GSDS rooms displayed but arranged on the fourth wall, or
  3. the first wall displays (a window on) the real environment, and the other three walls display views of up to three selected GSDS rooms. 

 

In Project 16, Versions I and II, the 4D vision & surround sound are added to Project 13.

 

In Project 17, Version III, there are three layers:

  • the equivalent of  Project 13 rooms
  • augmented by "4D" vision, surround sound &
  • augmented reality added.

That is, in Version III, the AR glasses include two cameras, one associated with each side of the glasses, providing 3D vision of the user's real environment. The temples of the glasses each have a microphone which senses the sound, with direction as well as frequency pattern, and association with the focus of interest in the scene in view. The user can pre-program the blending of the real environment with the Project 13 information their system is tuned to. The tuning can be to independent broadcasts; to friends and associates mutually agreed (& equipped) to share (one way or both), either in real time or time-shifted (that is the "4" of "4D"); or to local informants (i.e. services offered or provided) within the local real environments.

For just one example with Version III: a user can be in a shopping center, choosing to converse with their child at home, who is reading & being coached to understand Royal Reflections (the child is learning to read), superimposed on the real-environment scene in the shopping center. Taking note of a camera in a window, the user speaks to the child, asking the child to wait (interpreted as "Control Speak Wait"), then starts to converse with a demonstrator of cameras (by speaking "Control Camera Demonstrator", which matched words read from a poster in the same window of the nearby shop, also equipped with Version III).

                           USER (INFLECTION INDICATING QUESTION)
"May I see the Canon 500?",
                           DEMONSTRATOR (PICKING UP AND SHOWING A VIDEO)     
"Certainly. Here is a video of operating it, or do you want to ask for something else?"
                           USER
"Control Video Record... please."

The Version III glasses record the video, while the user says "Control Listen Khai". Then the child, Khai, appears in the glasses and says (via the glasses' temples), "Daddy. Did grandma write this book?"

Daddy says, "Yes, Son, to help you learn to read."

Khai: "But, Daddy. I love to read. May I just read it all the way through?"

Daddy says, "Sure. Now, I have to see something. Bye Bye." "Bye Bye" closes the GSDS scene on Daddy's Version III AR glasses.

"Control Play Video" starts the recorded video while he sits in his car, after he has voice-controlled the glasses (with the words "Control Turn Off") to turn everything else off, unless urgently called by a GSDS or Version III user (selected by the user to be allowed to call urgently).
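The spoken orders in the example above all follow a "Control <verb> <argument>" pattern. A minimal sketch of such a parser follows; the two-part grammar is inferred from the example dialogue, not specified by RDL, and the function name is illustrative.

```python
def parse_control(utterance: str):
    """Split a spoken order into a (verb, argument) pair.

    Orders begin with the word "Control"; anything else is treated as
    ordinary conversation and returns None. Grammar assumed from the
    examples: "Control Speak Wait", "Control Listen Khai", etc.
    """
    words = utterance.strip().split()
    if not words or words[0].lower() != "control":
        return None  # not an order: pass through as conversation
    verb = words[1] if len(words) > 1 else ""
    argument = " ".join(words[2:])
    return (verb, argument)
```

So "Control Listen Khai" parses as verb "Listen" with argument "Khai", while "May I see the Canon 500?" is passed through untouched, as the example requires.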

*Note: The intelligence programmed into the AR glasses includes listening, especially to the user's conversations as they take place, and interpretation of the context & meaning of the user's speech. As its understanding of the user's voice and his (next) need(s) grows, the degree to which the user needs to speak specific words and phrases, to be understood as orders, is reduced. A small window "on screen" indicates the glasses' understanding of the situation leading to an order; the next step, a command, is displayed. Various ways of indicating agreement are being explored: "Go!" spoken, typed, or a key pressed on a control pad or on the AR glasses. These may be further explored, tested & adopted by specific Version III users.
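The confirmation step described in the note could be sketched as follows, assuming "Go!" (spoken or typed) as the agreement signal; the signal set and the function name are illustrative assumptions, not a specified RDL interface.

```python
def confirm_and_run(inferred_command: str, agreement_signal: str, execute):
    """Display an inferred command in the on-screen window and run it
    only after an explicit agreement signal from the user.

    `execute` is a callable taking the command string; the accepted
    agreement signals are an assumption based on the note above.
    """
    AGREEMENT = {"go!", "go"}
    print(f"[on-screen] proposed command: {inferred_command}")
    if agreement_signal.strip().lower() in AGREEMENT:
        return execute(inferred_command)
    return None  # user did not agree; the proposed command is discarded
```

The point of the design is that the glasses may *propose* commands from context, but nothing executes without the user's explicit "Go!", keeping the reduced-vocabulary convenience without surrendering control.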

In addition, there is RDL's R&D to enable the AR glasses to sense the directions and curvature(s) of both of the user's eyes, to detect the location & distance of the user's focus of interest. Combined with a sensor to detect the (4D) orientation of the user's head, the interpretation of what he says (&/or keys or otherwise indicates) can be put together to signal the user's focus of interest, whom he is addressing & what he is talking about, especially if it is related to something in his view. Furthermore, this understanding of the specific, combined behaviors of the users can be intercommunicated among other Version III users' glasses to sense the specific and overall situation, including (but not limited to) the likely evolution of the conversation. This can be used to prepare for alternative next steps, re-judgement, readjustment, and quick & efficient communications to allow collaboration, or to "wipe the slate" to speed up dealing with novelty. Records for professional evaluation of typical situations can be studied & made useful for Version III users to work efficiently & enjoy their Version III AR facilities.
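The distance estimate described above could rest on vergence: the two eyes' gaze directions converge on the focus of interest, and the convergence angle plus the interpupillary distance yield the distance. A simplified 2D sketch follows, assuming (hypothetically) that the eye sensors report each eye's inward rotation from straight ahead; the real R&D would involve full 3D geometry and head orientation.

```python
import math

def focus_distance(interpupillary_m: float,
                   left_in_deg: float,
                   right_in_deg: float) -> float:
    """Estimate the distance (meters) to the user's focus of interest
    from the inward rotation of each eye.

    Simplified 2D geometry: both gaze rays cross at the focus point,
    which sits on the midline between the eyes.
    """
    vergence = math.radians(left_in_deg + right_in_deg)  # total convergence angle
    if vergence <= 0:
        return float("inf")  # parallel or diverging gaze: focus at infinity
    # Half the interpupillary baseline over the tangent of half the angle.
    return (interpupillary_m / 2.0) / math.tan(vergence / 2.0)
```

With a 60 mm interpupillary distance, each eye rotated inward by about 1.72 degrees implies a focus roughly one meter away; parallel gaze (zero rotation) reads as focus at infinity.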

A novel form of entertainment & shared experience(s) can include Version III users sharing each other's views of their real environments: for example, a sports match, opera or air show. Individuals may shift focus of interest from one real environment (viewed by any one of the Version III users) to another, & alter the (necessarily limited) number of mutual sharers whose rooms can be viewed on one pair of Version III glasses, as they wish to share a real environment with them. The various comments and instructions can be selected by each Version III user.

One of RDL's ambitions is to organize (&/or re-organize) an opera, with an orchestra (with Conductor) and a cast of singers, dancers etc. The "Producer" carries Version III AR glasses in a pleasant location (possibly an outdoor park). All viewers must also use Version III to view the location & select the other sharers, which may include the "Conductor", players, & cast, assembled or in independent GSDS rooms. Such rooms may use Version I or II to appear, but Version III to view the whole Opera. Any mix of individuals can be "live" or "recorded", except the Producer &, most likely, the Conductor. (Or, of course, the whole Opera could be recorded & played, &/or all parts recorded from time to time, or piecemeal, & reorganized by the Producer &/or Conductor for playing back, or some parts taken from recordings &/or combined with new real-time performance(s).)

A form of karaoke is also planned, so that users of GSDS Versions I and II and of Version III can superimpose their own performance of a part in an Opera, which they can view as they perform and record themselves (for display as an integrated version of their vision & the sounds heard).

Version III users can adopt aspects of Opera, enabling them to "cast" themselves as Producer and/or Conductor, and/or to select themselves, or other recorded and/or live performances, as parts of new versions of one of the existing versions of the original Opera. The possibility of merging other independent Operas, to provide a "mixed" blend of more than one Opera, is under investigation.

A fairly simple case, especially, is changing real-time or recorded "real environment" background(s): replacing them with a new real-time or recorded "real environment" behind an existing (and/or blended) live "real environment" of an Opera. This results in a new mix of old versions, or an old Opera mixed with a new one, recorded, or mixed with original parts being performed live and viewed and/or recorded as parts and reintegrated as a complete new Opera or Opera version.

Finally, as of now, any episode of participation by GSDS and Version III users can be handled (as described above) in the same manner as Opera, with or without music. Any Version III user, self- (or otherwise) appointed as Producer or Conductor, can select (planned &/or recorded) episodes, with permission (paid for &/or provided) by participants, to arrange & combine them in an appropriate manner, to provide, either to GSDS and Version III users &/or other independent users, such things as:

  1. interesting "News"
  2. entertaining "Plays" 
  3. instructive "Classes" &/or
  4. if music is significantly involved, called "Opera"
  5. similar to Motion Pictures
  6. Videos, etc.
  7. as live, recorded or mixed real-time & recorded scenes.

In summary: the design, construction & supply of education, entertainment, teaching & learning can be derived from the participation of, &/or use of the knowledge &/or skill(s) of, GSDS &/or Version III users.

These scenes can be enjoyed and benefited from by such users &, in some ways, by independent users with access to these scenes via other media.

Please note also that some Plays, Classes and certain Opera (Ballet) are (or will be) derived from Shirl's (and G. Haven d'Amaury III's) books, & in particular will also be distributed intergalactically, to teach children to love reading and love learning to read, as well as learning & teaching foreign (even intergalactic) languages.