

Title:
IMAGING APPARATUS
Document Type and Number:
WIPO Patent Application WO/2019/111227
Kind Code:
A1
Abstract:
A dual-mode stereoscopic/panoramic imaging apparatus includes a stitching module for creating a 360° panoramic image and/or a stereo image synthesizing module for generating a 3D stereoscopic image. Either module can compute updated image-alignment calibration data for each of the two cameras by acquiring calibration-target images of an onboard calibration target and calculating updated rotation and translation data, and can use the updated image-alignment calibration data to stitch a panoramic image or synthesize a 3D stereoscopic image. Asymmetric cropping of each of the two acquired images can exclude imaging of the respective other camera and camera platform. Techniques for producing a fully stereoscopic 360-degree panoramic image from only four cameras in an L1-R1-L2-R2 configuration are also described herein.

Inventors:
MAROM TOMER (IL)
YAHAV YANIV (IL)
KOTTEL ILYA (IL)
BARAK AMIT (IL)
KRASAVETS ARYE (IL)
Application Number:
PCT/IB2018/059771
Publication Date:
June 13, 2019
Filing Date:
December 07, 2018
Assignee:
HUMANEYES TECH LTD (IL)
International Classes:
H04N13/243; G03B17/56; H04N13/232
Foreign References:
US20170078653A1 (2017-03-16)
US20160088280A1 (2016-03-24)
US20170019595A1 (2017-01-19)
Other References:
HO TUAN ET AL: "Dual-fisheye lens stitching for 360-degree imaging", 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 5 March 2017 (2017-03-05), pages 2172 - 2176, XP033258801, DOI: 10.1109/ICASSP.2017.7952541
Attorney, Agent or Firm:
VAN DYKE, Marc (IL)
Claims:
WHAT IS CLAIMED IS:

1. An imaging apparatus comprising:

a. first (L1) 790A and second (R1) 790B wide-angle cameras laterally displaced from each other by a lateral displacement distance lat_dist_offset along a first lateral displacement axis 714A, the first (L1) 790A and second (R1) 790B wide-angle cameras being configured to respectively acquire first (L1_img) and second (R1_img) images of a scene, optical axes of the first (L1) 790A and second (R1) 790B wide-angle cameras being parallel to each other and normal to the first lateral displacement axis 714A;

b. third (L2) 790C and fourth (R2) 790D wide-angle cameras laterally displaced from each other by the lateral displacement distance lat_dist_offset along a second lateral displacement axis 714B parallel to the first lateral displacement axis 714A, the third (L2) 790C and fourth (R2) 790D wide-angle cameras configured to respectively acquire third (L2_img) and fourth (R2_img) images of the scene, optical axes of the third (L2) 790C and fourth (R2) 790D wide-angle cameras being parallel to each other and normal to the second lateral displacement axis 714B, wherein:

i. the first (L1) 790A and fourth (R2) 790D wide-angle cameras are laterally aligned with each other, the first (L1) 790A and fourth (R2) 790D wide-angle cameras facing away from each other;

ii. the second (R1) 790B and third (L2) 790C wide-angle cameras are laterally aligned with each other, the second (R1) 790B and third (L2) 790C wide-angle cameras facing away from each other; and

iii. each 790A, 790B, 790C, 790D of the wide-angle cameras has a common angle of coverage 180° + 2β;

iv. a value of β exceeds 0° and is at most 30°;

c. an image-processing module for forming left-eye and right-eye 360° stitched panoramic images as follows:

i. an angle α has a value 0 < α < 2β;

ii. over the range between -180° and 180°, the left-eye stitched 360° panoramic image is an ordered stitching

L2_img / R2_img / L1_img / R1_img / L2_img where:

A. a stitch line between L2_img and R2_img is located at -90° - α;

B. a stitch line between R2_img and L1_img is located at -90° + α;

C. a stitch line between L1_img and R1_img is located at 90° - α; and

D. a stitch line between R1_img and L2_img is located at 90° + α; and

iii. over the range between -180° and 180°, the right-eye stitched 360° panoramic image is an ordered stitching

R2_img / L1_img / R1_img / L2_img / R2_img where:

A. a stitch line between R2_img and L1_img is located at -90° - α;

B. a stitch line between L1_img and R1_img is located at -90° + α;

C. a stitch line between R1_img and L2_img is located at 90° - α; and

D. a stitch line between L2_img and R2_img is located at 90° + α.
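The seam layout recited in claim 1 can be sketched compactly. The following illustrative Python (the function name and dictionary representation are my own; only the image order and the four seam angles come from the claim) returns, for a chosen angle α, the stitch-line positions of the left-eye and right-eye panoramas over [-180°, 180°]:

```python
def seam_angles(alpha_deg):
    """Stitch-line angles (degrees) for the left-eye and right-eye
    360-degree panoramas of claim 1, for a given alpha (0 < alpha < 2*beta)."""
    left = {
        ("L2_img", "R2_img"): -90 - alpha_deg,
        ("R2_img", "L1_img"): -90 + alpha_deg,
        ("L1_img", "R1_img"):  90 - alpha_deg,
        ("R1_img", "L2_img"):  90 + alpha_deg,
    }
    right = {
        ("R2_img", "L1_img"): -90 - alpha_deg,
        ("L1_img", "R1_img"): -90 + alpha_deg,
        ("R1_img", "L2_img"):  90 - alpha_deg,
        ("L2_img", "R2_img"):  90 + alpha_deg,
    }
    return left, right

# Example: with alpha = 10 degrees, the left-eye seam between L1_img and
# R1_img sits at 80 degrees, while the right-eye seam for the same pair
# sits at -80 degrees, so the two eyes never share a seam position.
left, right = seam_angles(10)
```

Note that each eye's four seams are offset by ±α around ±90°, which is what keeps corresponding left-eye and right-eye seams from coinciding.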

2. The imaging apparatus of claim 1 wherein:

I. positions and orientations of the first (L1) 790A, second (R1) 790B, third (L2) 790C, and fourth (R2) 790D cameras define (A) an intermediating region-of-space 300 between the first and second cameras and bounded by optical axes of the first 790A and second 790B cameras, the intermediating region-of-space 300 also being disposed between the third 790C and fourth 790D cameras and bounded by optical axes of the third 790C and fourth 790D cameras; and (B) first 301A and second 301B exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first 301A exterior region-of-space is local with respect to both the first 790A and fourth 790D cameras and remote with respect to both the second 790B and third 790C cameras, and the second 301B exterior region-of-space is local with respect to both the second 790B and third 790C cameras and remote with respect to both the first 790A and fourth 790D cameras;

II. the first (L1_img), second (R1_img), third (L2_img), and fourth (R2_img) images are all asymmetrically cropped to have a common post-cropping effective angle-of-view pc_aov so that for each of the first (L1_img), second (R1_img), third (L2_img), and fourth (R2_img) images,

i. the common post-cropping effective angle-of-view pc_aov optionally exceeds 180°;

ii. an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and

iii. an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

3. The imaging apparatus of any preceding claim wherein the positions and orientations of all four cameras 790A-790D are all held rigid relative to each other.

4. The imaging apparatus of any one of claims 1-3 wherein (i) the first (L1) 790A and second (R1) 790B wide-angle cameras reside on a first common module, (ii) the third (L2) 790C and fourth (R2) 790D wide-angle cameras reside on a second common module; and (iii) the first and second common modules are detachably attached to each other.

5. The imaging apparatus of any one of claims 1-3 wherein (i) the first (L1) 790A and fourth (R2) 790D wide-angle cameras reside on a first common module, (ii) the second (R1) 790B and third (L2) 790C wide-angle cameras reside on a second common module; and (iii) the first and second common modules are detachably attached to each other.

6. The imaging apparatus of any preceding claim wherein the angle α has a value 0 < α < β.

7. The imaging apparatus of any preceding claim wherein the angle α has a value β/4 < α < β or a value β/2 < α.

8. The imaging apparatus of any preceding claim wherein the left-eye and right-eye stitched 360° panoramic images collectively provide a 360° fully-stereo panoramic image of the scene.

9. An imaging apparatus comprising:

a. first and second (L,R) dual-camera modules, each comprising two >180° angle-of-view (AOV) cameras having a common AOV of 180° + 2β (0° < β < 30°), each dual-camera module having a cameras-face-the-same-direction configuration such that the >180° AOV cameras are laterally spaced from each other by a distance L and face in the same direction; and

b. an inter-module docking assembly for rigidly holding the two (L,R) dual-camera modules to each other to form a fixed L1-R1-L2-R2 four-camera configuration from the two cameras of each dual-camera module; and

c. image processing logic configured to stitch images of a scene acquired by each camera when in the fixed L1-R1-L2-R2 four-camera configuration to obtain a 360° fully-stereo panoramic image of the scene.

10. An imaging apparatus comprising:

a. first and second foldable (L,R) dual-camera modules, each comprising two >180° angle-of-view (AOV) cameras having a common AOV of 180° + 2β (0° < β < 30°), each dual-camera module having folded and unfolded configurations such that:

i. when in the unfolded configuration, the >180° AOV cameras are laterally spaced from each other by a distance L and face in the same direction; and

ii. when in the folded configuration, the >180° AOV cameras are laterally aligned with each other and face in opposite directions;

b. an inter-module docking assembly for rigidly holding the two foldable (L,R) dual-camera modules to each other to form a fixed L1-R1-L2-R2 four-camera configuration from the two cameras of each dual-camera module; and

c. image processing logic configured to stitch images of a scene acquired by each camera when in the fixed L1-R1-L2-R2 four-camera configuration to obtain a 360° fully-stereo panoramic image of the scene.

11. The imaging apparatus of any preceding claim wherein the value of β is at most 20°.

12. The imaging apparatus of any preceding claim wherein lat_dist_offset has a value of at least 3 cm and at most 9 cm, corresponding to an interpupillary distance.

13. The imaging apparatus of any preceding claim wherein lat_dist_offset has a value of at least 4.5 cm and at most 8 cm, corresponding to an interpupillary distance.

14. The imaging apparatus of any preceding claim, wherein: (i) the first (L1) 790A and fourth (R2) 790D wide-angle cameras are offset from each other in a normal direction by a second offset distance dist2_offset, the normal direction being perpendicular to both lateral displacement axes 714A, 714B; and (ii) the second (R1) 790B and third (L2) 790C wide-angle cameras are offset from each other in the normal direction by the second offset distance dist2_offset.

15. The imaging apparatus of claim 14 wherein the second offset distance dist2_offset has a value of at least 3 cm and at most 9 cm, corresponding to an interpupillary distance.

16. The imaging apparatus of claim 14 wherein the second offset distance dist2_offset has a value of at least 4.5 cm and at most 8 cm, corresponding to an interpupillary distance.

17. The imaging apparatus of any of claims 9-16 wherein one or more of the following conditions is true:

i. at least a portion of the inter-module docking assembly is disposed entirely on or in the first and/or second dual-camera modules;

ii. the inter-module docking assembly comprises a fastener between the first and second dual-camera modules;

iii. the inter-module docking assembly is magnetic; and

iv. the inter-module docking assembly comprises a mechanical fastener.

18. The imaging apparatus of any preceding claim further comprising an inter-module docking sensor for detecting whether or not the dual-camera modules are docked to each other.

19. The imaging apparatus of any preceding claim, further comprising a video-display-controller operatively linked to the configuration sensor, for controlling display on an onboard or external display device to provide the following display modes:

i. a first display mode when the modules are undocked and a given one of the modules is unfolded, so that a ~180° stereoscopic image from the cameras of the given module is displayed on the display device;

ii. a second display mode when the modules are undocked and a given one of the modules is folded, so that a ~360° non-stereoscopic panoramic image from the cameras of the given module is displayed on the display device; and

iii. a third display mode when the modules are docked, so that the 360° fully-stereo panoramic image of the scene is displayed on the display screen.
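The three display modes of claim 19 are fully determined by the docked and folded states. An illustrative selector (mode labels and the function name are my own shorthand, not the claims' terminology):

```python
def display_mode(docked: bool, folded: bool) -> str:
    """Select among the three display modes of claim 19 from sensor state."""
    if docked:
        return "360-fully-stereo"   # third mode: docked L1-R1-L2-R2 rig
    if folded:
        return "360-mono-panorama"  # second mode: one module, cameras back-to-back
    return "180-stereo"             # first mode: one module, cameras side-by-side

# Undocked and folded: a single module acts as a back-to-back 360-degree camera.
mode = display_mode(docked=False, folded=True)
```

The docked test takes priority because, once the modules are docked, the per-module fold state is fixed by the four-camera configuration.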

20. The imaging apparatus of any of claims 9-19 wherein:

I. positions and orientations of the first (L1) 790A, second (R1) 790B, third (L2) 790C, and fourth (R2) 790D cameras define (A) an intermediating region-of-space 300 between the first and second cameras and bounded by optical axes of the first 790A and second 790B cameras, the intermediating region-of-space 300 also being disposed between the third 790C and fourth 790D cameras and bounded by optical axes of the third 790C and fourth 790D cameras; and (B) first 301A and second 301B exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first 301A exterior region-of-space is local with respect to both the first 790A and fourth 790D cameras and remote with respect to both the second 790B and third 790C cameras, and the second 301B exterior region-of-space is local with respect to both the second 790B and third 790C cameras and remote with respect to both the first 790A and fourth 790D cameras;

II. the first (L1_img), second (R1_img), third (L2_img), and fourth (R2_img) images are all asymmetrically cropped to have a common post-cropping effective angle-of-view pc_aov so that for each of the first (L1_img), second (R1_img), third (L2_img), and fourth (R2_img) images,

i. the common post-cropping effective angle-of-view pc_aov optionally exceeds 180°;

ii. an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and

iii. an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

21. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration;

c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly;

d. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and

e. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of

(i) at a first time, a pair of images acquired with the pivot assembly in the back-to- back configuration and stitched by the stitching module to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and

(ii) at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined to create a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

22. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration;

c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly;

d. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images; and

e. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of

(i) at a first time, a pair of images acquired with the pivot assembly in the back-to- back configuration and stitched to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and

(ii) at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined by the stereo image synthesizing module to create a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

23. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration;

c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly;

d. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image;

e. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images; and

f. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of

i. at a first time, a pair of images acquired with the pivot assembly in the back-to- back configuration and stitched by the stitching module to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and

ii. at a second time, a pair of images acquired with the pivot assembly in the side- by-side configuration and combined by the stereo image synthesizing module to generate a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

24. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; and

c. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image.

25. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; and

c. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images.

26. A dual-mode stereoscopic/panoramic imaging apparatus comprising:

a. a pivot assembly comprising:

i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and

ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane;

b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly;

c. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and

d. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes:

i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and

ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images.

27. The imaging apparatus of any one of claims 21 to 26, further comprising an elongated handle portion, wherein the onboard calibration target is attached to the handle portion or is integral thereto.

28. The imaging apparatus of any one of claims 21 to 27, wherein an angle-of-coverage of each of the two cameras is at least 205°, or at least 215°, or at least 225°, or at least 235°, or at least 245°, or at least 255°, or at least 265°.

29. The imaging apparatus of any one of claims 21 to 28, wherein the onboard calibration target occults no more than 10%, or no more than 5%, or no more than 2% of any angle-of-coverage of either of the two cameras.

30. The imaging apparatus of any one of claims 27 to 29, wherein the handle portion includes a substantially flat portion, such that when the imaging apparatus is oriented vertically with the pivot assembly above the handle portion, the imaging apparatus can stand unattended on a flat surface.

31. The imaging apparatus of any one of claims 27 to 30, wherein the onboard calibration target is one of etched or formed on the handle portion, printed thereupon, mounted thereupon, fastened thereto, adhered thereto, or extended or extensible therefrom.

32. The imaging apparatus of any one of claims 27 to 31, wherein (i) the onboard calibration target includes a plurality of operative target portions, (ii) at least one of the plurality of target portions is non-contiguous with any other target portion, (iii) when the platforms are in the back-to-back configuration, a first operative portion of an onboard calibration target on a first surface region of the handle portion is viewable by the first camera and not viewable by the second camera, and a second operative portion of an onboard calibration target on a second surface region of the handle portion is viewable by the second camera and not viewable by the first camera, (iv) the computing of the updated image-alignment calibration data includes acquiring calibration-target images of the respective viewable operative portions by each camera, (v) the calculating includes analyzing the respective acquired calibration-target images of the respective viewable operative portions, and (vi) the respective viewable operative portions both include substantially the same pattern or feature, or a mirror image thereof, for the analyzing.

33. The imaging apparatus of any one of claims 21 to 32, wherein at least one module selected from the group consisting of the stitching module and the stereo-synthesizing module is configured to analyze calibration-target images of the onboard calibration target acquired by the two cameras in the side-by-side configuration at a first time, so as to predict changes in relative rotation and/or translation data for each respective photodetector plane relative to the other for when the pivot assembly is in the back-to-back configuration at a second time.

34. The imaging apparatus of claim 33, wherein the computing of the updated image-alignment calibration data includes computing, based on the predicted changes, updated image-alignment calibration data that are modified from previously acquired image-alignment calibration data stored in a computer-readable storage medium.

35. The imaging apparatus of either one of claims 33 or 34, wherein the predicted changes are based on at least one of a twist angle between the respective photodetector planes and a non-twist deviation from co-planarity of the respective photodetector planes, detected or calculated when the pivot assembly is in the side-by-side configuration.

36. The imaging apparatus of any one of claims 21 to 35, wherein at least one module selected from the group consisting of the stitching module and the stereo-synthesizing module is configured to analyze calibration-target images of the onboard calibration target acquired by the two cameras in the side-by-side configuration at a first time, and, when the pivot assembly is in the back-to-back configuration at a second time, calculate changes in relative rotation and/or translation data for each respective photodetector plane relative to the other, based on the analyzing.

37. The imaging apparatus of claim 36, wherein the computing of the updated image-alignment calibration data includes computing, based on the calculated changes, updated image-alignment calibration data that are modified from previously acquired image-alignment calibration data stored in a computer-readable storage medium.

38. The imaging apparatus of either one of claims 36 or 37, wherein the calculated changes are based on at least one of a twist angle between the respective photodetector planes and a non-twist deviation from co-planarity of the respective photodetector planes, detected or calculated when the pivot assembly is in the side-by-side configuration.
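Claims 35 and 38 distinguish a twist angle between the photodetector planes from a non-twist deviation from co-planarity. One illustrative way to separate the two (my own formulation, assuming small relative Euler angles with x, y in-plane and z along the optical axis):

```python
import math

def twist_and_tilt(rx_deg, ry_deg, rz_deg):
    """For small relative Euler angles between the two photodetector planes,
    the rotation about the optical (z) axis is the twist angle, and the
    magnitude of the in-plane-axis rotations is the non-twist deviation
    from co-planarity (a tilt)."""
    twist_deg = rz_deg
    tilt_deg = math.hypot(rx_deg, ry_deg)
    return twist_deg, tilt_deg

# A relative rotation of (0.3, 0.4, 1.2) degrees decomposes into a
# 1.2-degree twist and roughly a 0.5-degree non-twist tilt.
twist, tilt = twist_and_tilt(0.3, 0.4, 1.2)
```

Separating the two matters because a pure twist shifts the stitch lines rotationally, whereas a tilt changes the epipolar geometry used for stereo synthesis.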

39. The imaging apparatus of any one of claims 34-38, wherein the computing of the updated image-alignment calibration data includes comparing a commonly-viewable feature of the onboard calibration target, as it appears in each of the calibration-target images acquired by the two cameras when in the side-by-side configuration, with stored information about the feature.

40. The imaging apparatus of any one of claims 21 to 39, wherein at least one module selected from the group consisting of the stitching module and the stereo-synthesizing module is configured to calculate, when the platforms are in the back-to-back configuration, extrinsic parameters for one of the two cameras based on extrinsic parameters acquired for the other of the two cameras.

41. The imaging apparatus of any one of claims 31 to 40, wherein at least one module selected from the group consisting of the stitching module and the stereo-synthesizing module is configured to perform a computing of updated image-alignment calibration data or a part of the computing, and the performing is at least one of: triggered by, subject to, or in response to, at least one condition being fulfilled.

42. The imaging apparatus of any one of claims 21 to 40, wherein at least one module selected from the group consisting of the stitching module and the stereo-synthesizing module is configured to perform a manually-initiated computing of updated image-alignment calibration data when triggered by a user input.

43. The imaging apparatus of any one of claims 21 to 42, additionally comprising an onboard or external non-transitory, computer-readable storage medium that includes previously acquired or calculated or baseline image-alignment calibration data for at least one of the two cameras for at least when the pivot assembly is in the back-to-back configuration.

44. The imaging apparatus of any one of claims 21 to 43, additionally comprising a cropping module for reducing respective angles-of-view of images acquired by the two cameras when the pivot assembly is in the side-by-side configuration, wherein the respective post-cropping effective angle-of-view of each image is selected so as to exclude at least the respective other ultra-wide-angle camera.

45. A dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view cropping, the camera apparatus comprising:

a. first and second ultra wide-angle cameras for acquiring first and second digital images of a scene;

b. a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that:

i. when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and

ii. when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions, c. image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in the side-by-side configuration, so that for each of the first and second digital images:

(i) a post-cropping effective angle-of-view is equal to a, where a equals or exceeds 180°;

(ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and (iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space; and

d. a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that:

i. when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode such that a post-cropping effective angle-of-view of each of the first and second images is equal to a; and

ii. when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in a panorama mode such that the first and second images each individually provide an angle of view greater than a and the first and second images are stitched together to collectively provide 360° of view.

46. The camera apparatus of claim 45, wherein the video-display-controller is configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed without cropping.

47. The camera apparatus of claim 45, wherein the video-display-controller is configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed after cropping to an angle of view b, where b > a.

48. The camera apparatus of any one of claims 45 to 47, wherein the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration is selected so as to exclude at least the respective other camera.

49. The camera apparatus of any one of claims 45 to 48, wherein the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration is selected so as to include substantially the entire portion of the scene that comprises stereo overlap.

50. The camera apparatus of any one of claims 45 to 49, wherein the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration is selected so as to exclude at least the housing assembly.

51. The camera apparatus of any one of claims 45 to 50, additionally comprising

a. an electronic communication connection with a non-transitory computer-readable storage medium for the storage of images; and

b. software instructions, which when executed by the image-processing circuitry, cause the image-processing circuitry to store, in the non-transitory storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; a panoramic image resulting from stitching the two images together; and a stereoscopic image in a format readable by a 3D image display device.

52. A camera apparatus for acquiring two images substantially simultaneously and forming therefrom a complex image, where the form of the complex image is user- selectable from a group consisting of panoramic and stereoscopic images, the camera apparatus comprising:

a. a housing assembly comprising two pivotably joined camera platforms operable to be user-manipulated, alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the choice of configuration being effective to make a user-selection between a panoramic image and a stereoscopic image;

b. two ultra-wide-angle cameras, each camera installed on one of the two camera platforms such that in the side-by-side configuration the two cameras face in the same direction, and in the back-to-back configuration the two cameras face in opposite directions; c. electronic circuitry configured to asymmetrically crop the two images responsive to the choice of configuration being the side-by-side configuration, such that (i) a post-cropping effective angle-of-view of each cropped image is equal to a, where a equals or exceeds 180° and (ii) at least the cameras and the camera platforms are excluded from the cropped images; and

d. a display device at least operable to display a complex image formed by (i) stitching the two acquired images together to form a panoramic image or (ii) cropping and combining the two acquired images together to form a stereoscopic image.

53. The camera apparatus of claim 52, wherein the electronic circuitry is additionally configured, responsive to the choice of configuration being the back-to-back configuration, to do one of: (i) asymmetrically crop at least one of the two images; (ii) symmetrically crop at least one of the two images; and (iii) not crop either of the images.

54. A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images, the method comprising:

a. acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image;

b. using electronic circuitry of the camera apparatus, cropping the two acquired images such that (i) in response to a user selection of a stereoscopic image, the cropping is applied (A) asymmetrically and (B) such that at least the cameras and the camera platforms are excluded from the cropped images and (ii) the post-cropping effective angle-of-view of each of the two cropped acquired images is equal to a, where a equals or exceeds 180°; and

c. performing at least one of: i. displaying, on a display device of the camera apparatus, a complex image formed by combining the two cropped acquired images, and ii. storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

55. The method of claim 54, wherein cropping the two acquired images is also such that: (iii) in response to a user selection of a panoramic image, the cropping is applied in one of the following ways: (A) asymmetrically for at least one of the two images; (B) symmetrically for at least one of the two images; and (C) not for either of the images.

56. A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images, the method comprising:

a. acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image;

b. using electronic circuitry of the camera apparatus, selecting a cropping mode in response to the user manipulation of the camera platforms and thereupon cropping the two acquired images in accordance with the selected cropping mode, the cropping mode selected from the group comprising:

i. a 180°+ asymmetric cropping mode for asymmetrically cropping two images acquired in the side-by-side configuration, such that (A) a post-cropping effective angle-of-view of each cropped image is equal to a, where a equals or exceeds 180° and (B) at least the cameras and the camera platforms are excluded from the cropped images, and ii. a panoramic image processing mode for selectively cropping two images acquired in the back-to-back configuration, such that (i) the cropping is applied: (A) asymmetrically for at least one of the two images; (B) symmetrically for at least one of the two images; and (C) not for either of the images; and (ii) a post-cropping effective angle-of-view of each cropped image equals or exceeds a; and

c. performing at least one of:

i. displaying, on a display device of the camera apparatus, a complex image formed by combining the two cropped acquired images, and ii. storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

57. A dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view cropping, the camera apparatus comprising:

a. first and second ultra wide-angle cameras for acquiring first and second digital images of a scene;

b. a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that:

i. when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define:

A. an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and

B. first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and ii. when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions, c. image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in a side-by-side configuration, so that for each of the first and second digital images:

i. a post-cropping effective angle-of-view equals or exceeds 180°; ii. an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and

iii. an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space; and

d. a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that:

i. when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in panorama mode such that the first and second images each individually provide at least 180° of view and the first and second images are stitched together; and

ii. when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode.

58. A dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view cropping, the camera apparatus comprising: a. first and second ultra wide-angle cameras for acquiring first and second digital images of a scene;

b. a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that:

i. when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and

ii. when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions; and c. image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in the side-by-side configuration, so that for each of the first and second digital images:

(i) a post-cropping effective angle-of-view is equal to a, where a equals or exceeds 180°;

(ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and

(iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

59. A camera apparatus for acquiring two images substantially simultaneously and forming therefrom a complex image, where the form of the complex image is user- selectable from a group consisting of panoramic and stereoscopic images, the camera apparatus comprising: a. a housing assembly comprising two pivotably joined camera platforms operable to be user-manipulated, alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the choice of configuration being effective to make a user-selection between a panoramic image and a stereoscopic image;

b. two ultra-wide-angle cameras, each camera installed on one of the two camera platforms such that in the side-by-side configuration the two cameras face in the same direction, and in the back-to-back configuration the two cameras face in opposite directions; and

c. electronic circuitry configured to asymmetrically crop the two images responsive to the choice of configuration being the side-by-side configuration, such that (i) a post-cropping effective angle-of-view of each cropped image is equal to a, where a equals or exceeds 180° and (ii) at least the cameras and the camera platforms are excluded from the cropped images.

60. The camera apparatus of claim 59, additionally comprising a display device at least operable to display a complex image formed by (i) stitching the two acquired images together to form a panoramic image or (ii) cropping and combining the two acquired images together to form a stereoscopic image.

61. A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images, the method comprising:

a. acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image;

b. using electronic circuitry of the camera apparatus, cropping the two acquired images such that (i) in response to a user selection of a stereoscopic image, the cropping is applied (A) asymmetrically and (B) such that at least the cameras and the camera platforms are excluded from the cropped images and (ii) the post-cropping effective angle-of-view of each of the two cropped acquired images is equal to a, where a equals or exceeds 180°; and

c. performing at least one of:

i. displaying, on a display device, a complex image formed by combining the two cropped acquired images, and

ii. storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

62. A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images, the method comprising:

a. acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image;

b. using electronic circuitry of the camera apparatus, selecting a cropping mode in response to the user manipulation of the camera platforms and thereupon cropping the two acquired images in accordance with the selected cropping mode, the cropping mode selected from the group comprising:

i. a 180°+ asymmetric cropping mode for asymmetrically cropping two images acquired in the side-by-side configuration, such that (A) a post-cropping effective angle-of-view of each cropped image is equal to a, where a equals or exceeds 180° and (B) at least the cameras and the camera platforms are excluded from the cropped images, and ii. a panoramic image processing mode for selectively cropping two images acquired in the back-to-back configuration, such that (i) the cropping is applied: (A) asymmetrically for at least one of the two images; (B) symmetrically for at least one of the two images; and (C) not for either of the images; and (ii) a post-cropping effective angle-of-view of each cropped image equals or exceeds a; and

c. performing at least one of:

i. displaying, on a display device, a complex image formed by combining the two cropped acquired images, and

ii. storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

63. A dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view cropping, the camera apparatus comprising:

a. first and second ultra wide-angle cameras for acquiring first and second digital images of a scene;

b. a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that:

i. when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define:

A. an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and

B. first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and ii. when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions; and c. image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in a side-by-side configuration, so that for each of the first and second digital images:

i. a post-cropping effective angle-of-view equals or exceeds 180°; ii. an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and

iii. an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

Description:
IMAGING APPARATUS

CROSS-REFERENCE TO RELATED APPLICATIONS

This application draws priority from the following applications, all of which are incorporated by reference for all purposes as if fully set forth herein: (i) U.S. Provisional Patent Application Serial No. 62596112 filed Dec 7, 2017; (ii) U.S. Provisional Patent Application Serial No. 62681667 filed July 18, 2018; (iii) U.S. Provisional Patent Application Serial No. 62700095 filed July 18, 2018; (iv) U.S. Provisional Patent Application Serial No. 62700571 filed July 19, 2018; (v) U.S. Provisional Patent Application Serial No. 62700104 filed July 18, 2018; (vi) U.S. Provisional Patent Application Serial No. 62700580 filed July 19, 2018; and (vii) U.S. Provisional Patent Application Serial No. 62700588 filed July 19, 2018.

FIELD OF THE INVENTION

The present invention relates to imaging apparatuses and related methods for their use. More specifically, the invention relates to imaging apparatuses comprising ultra-wide-angle cameras, adapted for stereoscopic and panoramic photography, to calibration of such cameras, and to modes of operation of such imaging apparatuses.

BACKGROUND

This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present invention, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

There is ever-increasing consumer demand for handheld electronic devices that can perform functions that previously required larger devices and expensive software to accomplish. In the field of photography, there is interest in being able to produce more complex images beyond the ordinary digital still and video images that have been familiar for many years. Such complex images include panoramic images of more than 180° in breadth and even up to 360°, i.e., surround images that can be used in 'virtual reality' viewers or even on regular displays which can allow a viewer to shift their view within the panorama; such a view-shifting viewing feature is available, for example, on the popular video website YouTube. Creating panoramic images from multiple images involves techniques such as stitching the images together and resolving any overlap between them. Another example of complex images is the stereoscopic image, which creates or enhances the illusion of depth in an image by means of stereopsis for binocular vision. This illusion of depth is what is commonly known as '3D' (short for three-dimensional) and is popular in various types of media. Thus, there is a need for a device that can perform these various functions, including image acquisition and display of complex images formed therefrom, and also perform image processing and image manipulation without the need for downloading or transmitting the original acquired images.
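As a concrete illustration of overlap resolution, the following sketch feather-blends the shared region of two already-aligned one-dimensional intensity strips. This is a minimal stand-in, not the stitching method of the present disclosure; the function name, the linear-weight choice, and the toy pixel values are all assumptions made for illustration.

```python
# Illustrative sketch only: the simplest way to resolve overlap when
# stitching two already-aligned image strips is to feather-blend the
# shared region. Images are modeled here as lists of intensities.

def feather_stitch(left, right, overlap):
    """Stitch two 1-D intensity rows whose last/first `overlap`
    samples view the same part of the scene."""
    if overlap == 0:
        return left + right
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # weight ramps from left image to right
        blended.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    return left[:-overlap] + blended + right[overlap:]

row = feather_stitch([10, 10, 20, 30], [28, 40, 50, 50], overlap=2)
```

A real stitcher works on two-dimensional, warped fisheye images and typically derives the overlap from the cameras' extrinsic calibration rather than taking it as a parameter.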

Calibration of cameras, as is known in the art, can be used to modify how acquired images are shown, or even to modify the acquired images themselves.

Mathematically, calibration involves estimating the parameters of a pinhole camera model approximating the camera that produced a given image (“image” as used herein can mean either still photographs or video images).

Internal, or intrinsic, calibration is often used to set or reset the calibration data of cameras, for example in a factory setting or during the life of the product after sale. This involves determining the values of intrinsic parameters that encompass focal length, image sensor format, and principal point. In some cases, the intrinsic calibration can also encompass lens distortion, which may be particularly relevant to the ultra-wide-angle cameras in the present disclosure.
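The role of these intrinsic parameters can be sketched with the standard pinhole projection, u = fx·X/Z + cx and v = fy·Y/Z + cy. The parameter values below are invented for illustration; real values come from a calibration procedure, and the lens-distortion terms that matter for ultra-wide-angle lenses are omitted.

```python
# Minimal pinhole-projection sketch: intrinsic parameters (focal
# lengths fx, fy in pixels; principal point cx, cy) map a point in
# 3-D camera coordinates to pixel coordinates. Lens distortion,
# significant for ultra-wide-angle optics, is deliberately ignored.

def project_pinhole(point_cam, fx, fy, cx, cy):
    """Project (X, Y, Z) in camera coordinates to (u, v) pixels."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative values only: a point 2 m ahead, slightly right and up.
u, v = project_pinhole((0.1, -0.05, 2.0), fx=800, fy=800, cx=640, cy=360)
# u = 800 * 0.1 / 2.0 + 640 = 680;  v = 800 * -0.05 / 2.0 + 360 = 340
```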

External calibration involves the determination of extrinsic parameters that denote the coordinate system transformations from 3D world coordinates to 3D camera coordinates. In other words, the extrinsic parameters define the position of the camera center and the camera's heading in world coordinates. Determining extrinsic parameters can be an important step in processing images to create complex images, for example stitching two (or more) images to create panoramic images, or combining pairs of images to synthesize 3D stereoscopic images. The extrinsic parameters include rotation and translation, which are used, for example, in aligning two images for stitching or stereo image synthesis.
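A minimal sketch of what the extrinsic parameters do: a rotation R and a translation t map world coordinates to camera coordinates via X_cam = R·X_world + t. For brevity the rotation here is restricted to a yaw about the vertical axis; a full implementation would use a general 3×3 rotation matrix, and all numeric values are illustrative assumptions.

```python
import math

def world_to_camera(point_world, yaw_rad, t):
    """Apply X_cam = R * X_world + t, with R a rotation by `yaw_rad`
    about the vertical (y) axis. Pure-Python stand-in for the usual
    3x3 matrix multiply."""
    x, y, z = point_world
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    xc = c * x + s * z + t[0]
    yc = y + t[1]
    zc = -s * x + c * z + t[2]
    return (xc, yc, zc)

# A point 1 m along the world z axis, seen by a camera yawed 90 degrees
# with no translation, lands on the camera's x axis.
p = world_to_camera((0.0, 0.0, 1.0), math.pi / 2, (0.0, 0.0, 0.0))
```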

Calibration can become important in the case of a dual-mode camera apparatus where cameras are pivoted back and forth between a stereoscopic mode in a side-by-side configuration and a panoramic mode in a back-to-back configuration. Over time the physical alignment of the two pivotable camera platforms bearing the cameras can deteriorate and calibration can become necessary. Calibration, in most cases, requires a calibration target (also known as a calibration object) that can be imaged by a camera, and this calibration-target image can be compared with information stored in a camera’s long-term memory. However, it is not always feasible to depend on the presence of an external calibration target, especially when the camera apparatus is in the hand of a user.
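One hedged sketch of the idea described above: compare where two known features of the onboard calibration target appear in a freshly acquired calibration image against their stored baseline pixel positions, and recover an in-plane rotation update from the change in the angle of the line joining them. The feature coordinates and function names are invented for illustration; a practical implementation would detect many target features and solve for the full rotation and translation.

```python
import math

# Sketch: estimate an in-plane rotation drift from one pair of
# calibration-target features. Real calibration uses many features
# and recovers full 3-D rotation and translation.

def angle_of(p, q):
    """Angle of the line from p to q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def rotation_update_deg(stored, observed):
    """Degrees by which the observed target features are rotated
    relative to the stored calibration baseline."""
    return math.degrees(angle_of(*observed) - angle_of(*stored))

stored = ((100.0, 100.0), (300.0, 100.0))    # baseline feature pixels
observed = ((100.0, 100.0), (298.5, 117.4))  # same features, slightly rotated
drift = rotation_update_deg(stored, observed)  # roughly a 5-degree drift
```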

SUMMARY

Some embodiments of the invention are discussed below with reference to Figs. 46A-58B - for example, useful for solving a problem described with reference to Fig. 47.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration; c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; d. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and e. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of (i) at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched by the stitching module to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and (ii) at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined to create a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration; c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; d. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images; and e. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of (i) at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and (ii) at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined by the stereo image synthesizing module to create a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration; c. at least one of: i. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: A. computing updated image-alignment calibration data for each of the two cameras by (I) acquiring, by each of the two cameras, calibration-target images of a calibration target provided on a support arrangement (tripod) supporting the imaging apparatus, and (II) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the calibration target; and B. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and ii. a stereo image synthesizing module for generating a 180°+ 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: A. computing updated image-alignment calibration data for each of the two cameras by (I) acquiring, by each of the two cameras, calibration-target images of a calibration target provided on a support arrangement (tripod) supporting the imaging apparatus, and (II) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the calibration target; and B. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images; and d. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of (i) at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and (ii) at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined to generate a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration; c. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; d. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; e. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images; and f. a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of i. at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched by the stitching module to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and ii. at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined by the stereo image synthesizing module to generate a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.
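The role of the video-display-controller in element f above can be sketched as a simple dispatch on the configuration-sensor reading. The sketch is illustrative only; module interfaces and names are hypothetical, not taken from the application:

```python
SIDE_BY_SIDE = "side_by_side"
BACK_TO_BACK = "back_to_back"

def display_pair(config: str, img_left, img_right,
                 stitch_module, stereo_module, display):
    """Route an acquired image pair according to the configuration
    sensor reading: stitch a panorama in the back-to-back configuration,
    synthesize a stereoscopic image in the side-by-side configuration."""
    if config == BACK_TO_BACK:
        display.show_panorama(stitch_module.stitch(img_left, img_right))
    elif config == SIDE_BY_SIDE:
        display.show_stereo(stereo_module.synthesize(img_left, img_right))
    else:
        raise ValueError(f"unknown configuration: {config}")
```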

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; and c. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; and c. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; and b. at least one of: i. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: A. computing updated image-alignment calibration data for each of the two cameras by (I) acquiring, by each of the two cameras, calibration-target images of a calibration target provided on a support arrangement (tripod) supporting the imaging apparatus, and (II) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the calibration target; and B. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and ii. a stereo image synthesizing module for generating a 180°+ 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: A. computing updated image-alignment calibration data for each of the two cameras by (I) acquiring, by each of the two cameras, calibration-target images of a calibration target provided on a support arrangement (tripod) supporting the imaging apparatus, and (II) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the calibration target; and B. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images. In some embodiments, the imaging apparatus can additionally comprise a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration. In some embodiments, the imaging apparatus can additionally comprise a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of i. at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and ii. at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined to generate a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.

In embodiments, a dual-mode stereoscopic/panoramic imaging apparatus comprises: a. a pivot assembly comprising: i. first and second support platforms that are pivotably joined to each other to provide side-by-side and back-to-back configurations, and ii. first and second ultra wide-angle cameras respectively installed in the first and second camera platforms, each camera comprising a planar array of photodetectors defining a respective photodetector plane; b. an onboard calibration target for geometric calibration of the cameras, the onboard calibration target attached to the pivot assembly; c. a stitching module for creating a 360° panoramic image from a pair of images acquired by the two cameras in the back-to-back configuration, wherein the creating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, stitching a pair of images to form a 360° panoramic image; and d. a stereo image synthesizing module for generating a 3D stereoscopic image from a pair of images acquired by the two cameras in the side-by-side configuration, wherein the generating includes: i. computing updated image-alignment calibration data for each of the two cameras by (A) acquiring calibration-target images of the onboard calibration target by each of the two cameras, and (B) calculating updated rotation and translation data for each respective photodetector plane relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target; and ii. using the updated image-alignment calibration data, synthesizing a 3D stereoscopic image from the two images. In some embodiments, the imaging apparatus can additionally comprise a configuration sensor for detecting whether the platforms are in the side-by-side configuration or the back-to-back configuration. In some embodiments, the imaging apparatus can additionally comprise a video-display-controller operatively linked to the configuration sensor, for controlling the display, on an onboard or external display device, of i. at a first time, a pair of images acquired with the pivot assembly in the back-to-back configuration and stitched to create a 360° panoramic image, such that the video-display-controller causes the stitched pair of images to be displayed as a 360° panoramic image; and ii. at a second time, a pair of images acquired with the pivot assembly in the side-by-side configuration and combined to generate a 3D stereoscopic image, such that the video-display-controller causes the combined pair of images to be displayed as a 3D stereoscopic image.
In some of the above embodiments, an imaging apparatus can additionally comprise one or more pivot members joining the first and second support platforms and defining at least one pivot axis.

In some of the above embodiments, the first and second support platforms can be indirectly pivotably joined to each other via a pivot arrangement that includes an intermediating non-pivoting portion.

In some of the above embodiments, the onboard calibration target can be directly attached to the pivot assembly.

In some of the above embodiments, the onboard calibration target can be indirectly attached to the pivot assembly. In some of the above embodiments, an imaging apparatus can further comprise an elongated handle portion, wherein the onboard calibration target is attached to the handle portion or is integral thereto. In such embodiments, it can be that (i) the handle portion is longer in its longest dimension than the pivot assembly is in any dimension, and (ii) the longest central axis of the handle portion is substantially parallel to a pivot axis. In such embodiments, at least a part of the handle portion can be viewable by both cameras when in the back-to-back configuration. In such embodiments, at least a part of the handle portion can be viewable by both cameras when in the side-by-side configuration.

In some of the above embodiments, an angle-of-coverage of each of the two cameras can be at least 205°.

In some of the above embodiments, an angle-of-coverage of each of the two cameras can be at least 215°, or at least 225°, or at least 235°, or at least 245°, or at least 255°, or at least 265°.

In some of the above embodiments, it can be that the onboard calibration target occults no more than 10%, or no more than 5%, or no more than 2% of any angle-of-coverage of either of the two cameras.

In some of the above embodiments, it can be that the onboard calibration target is only viewable by each of the two cameras in a portion of each camera’s angle-of-coverage that is outside a 180° angle bisected by each camera’s respective optical axis. In some of the above embodiments, it can be that the handle portion includes first portions proximate to the pivot assembly and second portions more distant from the pivot assembly than the first portions, each of the first portions intercepting a wider angle-of-view range with respect to the first and second cameras than the second portions.

In some of the above embodiments, it can be that the handle portion includes a substantially flat portion, such that when the imaging apparatus is oriented vertically with the pivot assembly above the handle portion, the imaging apparatus can stand unattended on a flat surface. In some of the above embodiments, the onboard calibration target can be etched or formed on the handle portion, printed thereupon, mounted thereupon, fastened thereto, adhered thereto, or extended or extensible therefrom.

In some of the above embodiments, the onboard calibration target can include a plurality of operative target portions. In such embodiments, it can be that at least one of the plurality of target portions is non-contiguous with any other target portion. In such embodiments, it can be that when the platforms are in the back-to-back configuration, a first operative portion of an onboard calibration target on a first surface region of the handle portion is viewable by the first camera and not viewable by the second camera, and a second operative portion of an onboard calibration target on a second surface region of the handle portion is viewable by the second camera and not viewable by the first camera. The handle portion can have an axis of symmetry or of reflectional symmetry across a plane passing between the two photodetector planes, and the first and second surface regions can be corresponding regions across the axis of symmetry or the axis of reflectional symmetry. It can be that (i) the computing of the updated image-alignment calibration data includes acquiring calibration-target images of the respective viewable operative portions by each camera, and (ii) the calculating includes analyzing the respective acquired calibration-target images of the respective viewable operative portions. The respective viewable operative portions can both include substantially the same pattern or feature, or a mirror image thereof, for the analyzing.

In some above embodiments, the onboard calibration target can be characterized at least in part by a geometric pattern.

In some of the above embodiments, the onboard calibration target can be characterized at least in part by a pattern with visually contrasting portions.

In some of the above embodiments, it can be that the stitching module is configured to analyze calibration-target images of the onboard calibration target acquired by the two cameras in the side-by-side configuration at a first time, so as to predict changes in relative rotation and/or translation data for each respective photodetector plane relative to the other for when the pivot assembly is in the back-to-back configuration at a second time. In some of the above embodiments, it can be that the computing of the updated image-alignment calibration data includes computing, based on the predicted changes, updated image-alignment calibration data that are modified from previously acquired image-alignment calibration data stored in a computer-readable storage medium.

In some embodiments, it can be that the predicted changes are based on at least one of a twist angle between the respective photodetector planes and a non-twist deviation from co-planarity of the respective photodetector planes, detected or calculated when the pivot assembly is in the side-by-side configuration.

In some of the above embodiments, it can be that the stitching module is configured to analyze calibration-target images of the onboard calibration target acquired by the two cameras in the side-by-side configuration at a first time, and, when the pivot assembly is in the back-to-back configuration at a second time, calculate changes in relative rotation and/or translation data for each respective photodetector plane relative to the other, based on the analyzing. In such embodiments, computing of the updated image-alignment calibration data can include computing, based on the calculated changes, updated image-alignment calibration data that are modified from previously acquired image-alignment calibration data stored in a computer-readable storage medium. In such embodiments, the calculated changes can be based on at least one of a twist angle between the respective photodetector planes and a non-twist deviation from co-planarity of the respective photodetector planes, detected or calculated when the pivot assembly is in the side-by-side configuration. The previously acquired image-alignment calibration data can be baseline image-alignment calibration data provided with the imaging apparatus. The computing of the updated image-alignment calibration data can include comparing a commonly-viewable feature of the onboard calibration target as it appears in each of the calibration-target images acquired by the two cameras when in the side-by-side configuration, with stored information about the feature.
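The twist angle and the non-twist deviation from co-planarity mentioned above can be illustrated with a small geometric sketch. Here each photodetector plane is represented (hypothetically; the application does not prescribe a representation) by a unit normal and a unit in-plane reference axis; the deviation is the angle between the normals, and the twist is the residual in-plane rotation once the normal mismatch is removed:

```python
import numpy as np

def plane_misalignment(n1, u1, n2, u2):
    """Return (twist_deg, deviation_deg) between two photodetector planes.
    n1, n2: unit plane normals; u1, u2: unit in-plane reference axes."""
    cos_dev = np.clip(n1 @ n2, -1.0, 1.0)
    deviation = np.degrees(np.arccos(cos_dev))
    # Rotate plane 2 so its normal coincides with plane 1's normal,
    # then measure the remaining rotation of the reference axis.
    axis = np.cross(n2, n1)
    s = np.linalg.norm(axis)
    if s < 1e-12:                       # normals already parallel
        u2_aligned = u2
    else:
        axis /= s
        ang = np.arccos(cos_dev)
        # Rodrigues' rotation formula
        u2_aligned = (u2 * np.cos(ang)
                      + np.cross(axis, u2) * np.sin(ang)
                      + axis * (axis @ u2) * (1 - np.cos(ang)))
    twist = np.degrees(np.arccos(np.clip(u1 @ u2_aligned, -1.0, 1.0)))
    return twist, deviation
```

A calibration module could feed these two angles into the prediction or calculation of changed image-alignment calibration data described above.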

In some of the above embodiments, it can be that the stitching module is configured to calculate, when the platforms are in the back-to-back configuration, extrinsic parameters for one of the two cameras based on extrinsic parameters acquired for the other of the two cameras. In some of the above embodiments, the stitching can include transforming the content of at least one of the two images so as to improve the alignment between the two images.

In some of the above embodiments, it can be that the computing of updated image-alignment calibration data for each of the two cameras takes place at a first time, and the stitching of a pair of images to form a 360° panoramic image takes place at a second time. In some such embodiments, the second time can be substantially the same time as the first time, such that a computing of updated image-alignment calibration data is performed before each stitching. In some such embodiments, the second time can be substantially later than the first time, such that more than one stitching is performed between a pair of sequential computings of updated image-alignment calibration data.
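The first-time/second-time relationship just described amounts to a calibration schedule. A minimal sketch (hypothetical interface, not from the application): with every_n=1 calibration is recomputed before each stitching, while every_n>1 reuses cached calibration data for several stitchings between recomputations:

```python
class CalibrationSchedule:
    """Recompute image-alignment calibration before every stitch
    (every_n=1) or only every n-th stitch, reusing cached data otherwise."""

    def __init__(self, compute_calibration, every_n: int = 1):
        self.compute = compute_calibration   # callable returning fresh data
        self.every_n = every_n
        self.count = 0                       # stitchings requested so far
        self.cached = None

    def calibration_for_next_stitch(self):
        if self.cached is None or self.count % self.every_n == 0:
            self.cached = self.compute()     # sequential computing
        self.count += 1
        return self.cached
```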

In some of the above embodiments, it can be that the stitching module is configured to perform a computing of updated image-alignment calibration data or a part of the computing, and the performing is at least one of: triggered by, subject to, or in response to, a condition being fulfilled. In some such embodiments, the performing of a computing of updated image-alignment calibration data or of a part of a computing, by the stitching module, can be triggered by, subject to, or in response to multiple conditions being fulfilled. In some such embodiments, the stitching module can be configured to perform a manually-initiated computing of updated image-alignment calibration data when triggered by a user input. In some such embodiments, the stitching module can be configured to perform a computing of updated image-alignment calibration data when triggered by a user input even if a condition is not fulfilled. In some such embodiments, the stitching module can be further configured to do one of: (i) display a visual warning and (ii) make a warning sound. In some such embodiments, the user input can be received via one of a graphical user interface and a physical button.

In some of the above embodiments, it can be that a. the stitching module is configured to (i) perform a user-initiated computing in response to a user input received by the stitching module after a stitched pair of images is displayed as a first 360° panoramic image, and (ii) perform an additional stitching of the pair of images, using image-alignment calibration data updated by the user-initiated computing; and b. the video-display-controller is configured to cause the stitched pair of images to be displayed as a second 360° panoramic image in accordance with the additional stitching. In some such embodiments, the video-display-controller can be configured to provide a toggling display mode for toggling back-and-forth between the first and second 360° panoramic images. It can be that the user-initiated computing reduces at least one stitching error. It can be that the additional stitching improves the alignment between the two images.

In some of the above embodiments, it can be that the stereo image synthesizing module is configured to analyze calibration-target images of the onboard calibration target acquired by the two cameras in the side-by-side configuration, and calculate changes in relative rotation and/or translation data for each respective photodetector plane relative to the other, based on the analyzing. In some such embodiments, the computing of the updated image-alignment calibration data can include computing, based on the calculated changes, updated image-alignment calibration data that are modified from previous image-alignment calibration data stored in a computer-readable storage medium. In some such embodiments, the calculated changes can be based on at least one of a twist angle between the respective photodetector planes and a non-twist deviation from co-planarity of the respective photodetector planes, detected or calculated when the pivot assembly is in the side-by-side configuration. The previously acquired image-alignment calibration data can be baseline image-alignment calibration data provided with the imaging apparatus. The computing of the updated image-alignment calibration data can include comparing a commonly-viewable feature of the onboard calibration target as it appears in each of the calibration-target images acquired by the two cameras when in the side-by-side configuration, with stored information about the feature.

In some of the above embodiments, it can be that the computing of updated image-alignment calibration data for each of the two cameras takes place at a first time, and the synthesizing of a 3D stereoscopic image from a pair of images takes place at a second time. In some such embodiments, the second time can be substantially the same time as the first time, such that a computing of updated image-alignment calibration data is performed before each synthesizing. In some such embodiments, the second time can be substantially later than the first time, such that more than one synthesizing is performed between a pair of sequential computings of updated image-alignment calibration data. In some of the above embodiments, it can be that the stereo image synthesizing module is configured to perform a computing of updated image-alignment calibration data or a part of the computing, and the performing is at least one of: triggered by, subject to, or in response to, a condition being fulfilled. In some such embodiments, the performing of a computing of updated image-alignment calibration data or of a part of a computing, by the stereo image synthesizing module, can be triggered by, subject to, or in response to multiple conditions being fulfilled.

In some of the above embodiments, it can be that the stereo image synthesizing module is configured to perform a manually-initiated computing of updated image-alignment calibration data when triggered by a user input. In some embodiments, the stereo image synthesizing module can be configured to perform a computing of updated image-alignment calibration data when triggered by a user input even if a condition is not fulfilled. In some such embodiments, the stereo image synthesizing module can be further configured to do one of: (i) display a visual warning and (ii) make a warning sound. In some embodiments, the user input can be received via one of a graphical user interface and a physical button.

In some of the above embodiments, it can be that a. the stereo image synthesizing module is configured to (i) perform a user-initiated computing in response to a user input received by the stereo image synthesizing module after a combined pair of images is displayed as a first stereoscopic image, and (ii) perform an additional stereo synthesis of the pair of images, using image-alignment calibration data updated by the user-initiated computing; and b. the video-display-controller is configured to cause the combined pair of images to be displayed as a second stereoscopic image in accordance with the additional stereo synthesis. In some such embodiments, the video-display-controller can be configured to provide a toggling display mode for toggling back-and-forth between the first and second stereoscopic images. In some such embodiments, the user-initiated computing can do at least one of: increasing stereo overlap and reducing parallax error.

In some of the above embodiments, the 180°+ 3D stereoscopic image can have an angle-of-view of at least 180°. In embodiments disclosing a condition, the condition can include one of: reaching a number of image acquisitions since the last computing, and reaching a number of manipulations of the platforms between the side-by-side configuration and the back-to-back configuration since the last computing. In embodiments disclosing a condition, the condition can include receiving, by the stitching module, a measurement from an onboard accelerometer (in embodiments in which the imaging apparatus includes one), the measurement exceeding a predetermined limit. In embodiments disclosing a condition, the condition can include the onboard calibration target or an operative portion thereof not being occulted to the view of the one of the two cameras to which it is viewable when not occulted. In embodiments disclosing a condition, the condition can include the orientation of the imaging apparatus being within a predetermined range of a specified orientation. In embodiments disclosing a condition, the condition can include the pivot assembly being in a specific configuration.
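The conditions enumerated above can be combined as in the following illustrative check. All threshold values and parameter names are hypothetical, not taken from the application; the sketch triggers recalibration when any usage-based condition is met, but only while the calibration target is actually visible:

```python
def recalibration_due(acquisitions_since, flips_since, accel_peak_g,
                      target_visible, *, max_acquisitions=100,
                      max_flips=5, accel_limit_g=2.0):
    """Return True when a computing of updated image-alignment
    calibration data should be triggered:
    - enough image acquisitions since the last computing, or
    - enough side-by-side/back-to-back manipulations since it, or
    - an accelerometer measurement exceeding a predetermined limit
      (e.g. a shock or drop);
    gated on the calibration target not being occulted."""
    triggered = (acquisitions_since >= max_acquisitions
                 or flips_since >= max_flips
                 or accel_peak_g > accel_limit_g)
    return triggered and target_visible
```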

In some of the above embodiments, a configuration sensor can include at least one of an accelerometer and a gyroscope. In some embodiments, the configuration sensor can include an accelerometer and a gyroscope. In some embodiments, the accelerometer can be a part of an inertial measurement unit.
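One illustrative way an accelerometer-based configuration sensor could distinguish the two configurations, assuming (hypothetically; the application does not specify this arrangement) that each platform carries its own accelerometer and that the pivot axis direction is known in both platform frames: only the gravity components perpendicular to the pivot axis differ between configurations, being nearly parallel in the side-by-side configuration and nearly anti-parallel in the back-to-back configuration:

```python
import numpy as np

def detect_configuration(g1, g2, pivot_axis, tol_deg=20.0):
    """Classify the pivot configuration from per-platform gravity
    readings g1, g2 (each in its own platform frame) and a unit
    pivot-axis direction shared by both frames."""
    def perp_unit(v):
        v = v - (v @ pivot_axis) * pivot_axis   # drop along-axis component
        return v / np.linalg.norm(v)
    c = perp_unit(g1) @ perp_unit(g2)
    angle = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    if angle < tol_deg:
        return "side_by_side"
    if angle > 180.0 - tol_deg:
        return "back_to_back"
    return "indeterminate"
```

A mechanical switch at the hinge would serve equally well; the sketch merely shows how an accelerometer and gyroscope reading could be reduced to a configuration decision.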

In some of the above embodiments, the imaging apparatus can additionally comprise an onboard or external non-transitory, computer-readable storage medium that includes previously acquired or calculated or baseline image-alignment calibration data for at least one of the two cameras for at least when the pivot assembly is in the back-to-back configuration.

In some of the above embodiments, the imaging apparatus can additionally comprise a cropping module for reducing respective angles-of-view of images acquired by the two cameras when the pivot assembly is in the side-by-side configuration, wherein the respective post-cropping effective angle-of-view of each image is selected so as to exclude at least the respective other ultra-wide-angle camera.

Embodiments relate to camera devices for producing complex images such as panoramic and stereoscopic images, and methods for their use. In embodiments, a dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view-cropping comprises: (a) first and second ultra wide-angle cameras for acquiring first and second digital images of a scene; (b) a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that (i) when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and (ii) when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions.
The camera apparatus also comprises (c) image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in the side-by-side configuration, so that for each of the first and second digital images: (i) a post-cropping effective angle-of-view is equal to α, where α equals or exceeds 180°; (ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and (iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space. The camera apparatus also comprises: (d) a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that: (i) when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode such that a post-cropping effective angle-of-view of each of the first and second images is equal to α and (ii) when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in a panorama mode such that the first and second images each individually provide an angle of view greater than α and the first and second images are stitched together to collectively provide 360° of view.
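The asymmetric cropping arithmetic can be sketched as follows. This is an illustrative model only: the split ratio between the camera's own (local) side and the side facing the other camera is an assumption for demonstration, not a parameter disclosed for the apparatus.

```python
# Illustrative model of the 180°+ asymmetric cropping mode: a raw frame of
# `raw_aov` degrees is reduced to `alpha` degrees (alpha >= 180°) by cropping
# more from the side facing the other camera (intermediating region plus
# remote exterior) than from the camera's own local exterior. The 20%/80%
# split is an assumption for illustration, not a disclosed parameter.
def asymmetric_crop(raw_aov: float, alpha: float = 180.0,
                    local_fraction: float = 0.2) -> dict:
    """Return the degrees cropped from the local side vs. the far side."""
    if raw_aov < alpha:
        raise ValueError("raw angle-of-view is already below the target alpha")
    total_crop = raw_aov - alpha
    crop_local = total_crop * local_fraction
    crop_far = total_crop - crop_local  # intermediating + remote exterior
    # The mode requires the far-side crop to exceed the local-side crop.
    assert crop_far >= crop_local
    return {"alpha": alpha, "crop_local": crop_local, "crop_far": crop_far}
```

For example, a 220° raw frame cropped to 180° under this split loses 8° on the local side and 32° on the far side, satisfying the requirement that far-side cropping exceed local-side cropping.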

In some embodiments, the video-display-controller can be configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed without cropping.

In some embodiments, the video-display-controller can be configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed after cropping to an angle-of-view β, where β > α. Cropping can be asymmetric or symmetric.

In some embodiments, the image-processing circuitry can additionally provide a panoramic image processing mode for selectively reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in the back-to-back configuration, such that a post-cropping effective angle-of-view of each image equals or exceeds α. The panoramic image processing mode can include a mode for not cropping images acquired when the housing assembly is in the back-to-back configuration. The panoramic image processing mode can include a mode for symmetrically cropping images acquired when the housing assembly is in the back-to-back configuration. The panoramic image processing mode can include a mode for asymmetrically cropping images acquired when the housing assembly is in the back-to-back configuration.
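The three panoramic sub-modes (no cropping, symmetric cropping, asymmetric cropping) can be sketched as one selector. The function name, the mode strings, and the 25%/75% asymmetric split are all illustrative assumptions.

```python
# Sketch of the panoramic image processing mode's three sub-modes (no crop,
# symmetric crop, asymmetric crop). Each leaves a post-cropping angle-of-view
# beta with alpha <= beta <= raw_aov. The 25%/75% asymmetric split is an
# illustrative assumption.
from typing import Optional, Tuple

def panoramic_crop(raw_aov: float, mode: str, alpha: float = 180.0,
                   target_aov: Optional[float] = None) -> Tuple[float, float]:
    """Return (left_crop, right_crop) in degrees for one panorama frame."""
    if mode == "none":
        return (0.0, 0.0)
    beta = target_aov if target_aov is not None else alpha
    if not (alpha <= beta <= raw_aov):
        raise ValueError("need alpha <= beta <= raw angle-of-view")
    total = raw_aov - beta
    if mode == "symmetric":
        return (total / 2, total / 2)
    if mode == "asymmetric":
        return (total * 0.25, total * 0.75)  # illustrative uneven split
    raise ValueError(f"unknown mode: {mode!r}")
```

In all three sub-modes, the constraint is the same as in the text: each panorama frame keeps at least the stereo-mode angle-of-view so the two frames still overlap enough to be stitched.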

In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to exclude at least the respective other camera. In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to include substantially the entire portion of the scene that comprises stereo overlap. In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to exclude at least the housing assembly.

In some embodiments, digital images can comprise still photographs or video content.

In some embodiments, the camera apparatus can additionally comprise an electronic communication connection with a non-transitory computer-readable storage medium for the storage of images, and software instructions which, when executed by the image-processing circuitry, cause the image-processing circuitry to store, in the non-transitory storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; a panoramic image resulting from stitching the two images together; and a stereoscopic image in a format readable by a 3D image display device.

In embodiments, a camera apparatus for acquiring two images substantially simultaneously and forming therefrom a complex image is disclosed, where the form of the complex image is user-selectable from a group consisting of panoramic and stereoscopic images. The camera apparatus comprises (a) a housing assembly comprising two pivotably joined camera platforms operable to be user-manipulated, alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the choice of configuration being effective to make a user-selection between a panoramic image and a stereoscopic image; (b) two ultra-wide-angle cameras, each camera installed on one of the two camera platforms such that in the side-by-side configuration the two cameras face in the same direction, and in the back-to-back configuration the two cameras face in opposite directions; (c) electronic circuitry configured to asymmetrically crop the two images responsive to the choice of configuration being the side-by-side configuration, such that (i) a post-cropping effective angle-of-view of each cropped image is equal to α, where α equals or exceeds 180° and (ii) at least the cameras and the camera platforms are excluded from the cropped images; and (d) a display device at least operable to display a complex image formed by (i) stitching the two acquired images together to form a panoramic image or (ii) cropping and combining the two acquired images together to form a stereoscopic image. In some embodiments, the electronic circuitry can be additionally configured, responsive to the choice of configuration being the back-to-back configuration, to do one of: (i) asymmetrically crop at least one of the two images; (ii) symmetrically crop at least one of the two images; and (iii) not crop either of the images.
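The key idea above is that the pivot configuration itself acts as the user selection between a panorama and a stereoscopic image. A minimal dispatch sketch, in which every helper function is a placeholder standing in for the stitching, cropping, and stereo-combination stages (none of these names come from the source):

```python
# Minimal dispatch sketch: the pivot configuration itself acts as the user
# selection between a panorama and a stereoscopic image. The helper
# functions are placeholders standing in for the stitching, cropping, and
# stereo-combination stages; none of these names come from the source.
def stitch_panorama(a, b):
    return ("panorama", a, b)     # placeholder for 360° stitching

def crop_for_stereo(img):
    return ("cropped", img)       # placeholder for the 180°+ asymmetric crop

def combine_stereo(a, b):
    return ("stereo", a, b)       # placeholder for stereo synthesis

def form_complex_image(configuration: str, left_img, right_img):
    if configuration == "back-to-back":
        return stitch_panorama(left_img, right_img)
    if configuration == "side-by-side":
        return combine_stereo(crop_for_stereo(left_img),
                              crop_for_stereo(right_img))
    raise ValueError(f"unknown configuration: {configuration!r}")
```

Note that only the stereo path applies the asymmetric crop before combining, matching clause (c) above, while the panorama path goes straight to stitching.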

In some embodiments, the camera apparatus can additionally comprise a computer-readable non-transitory storage medium for storing the acquired images and/or the formed complex image in a format readable by the display device.

In some embodiments, when cropping and combining to form the stereoscopic image, the extent of the cropping can be selected so that a majority of the portion of the scene that comprises stereo overlap is included in each of the cropped images.

A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon is disclosed, wherein the two camera platforms are pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images. The method comprises: (a) acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image; (b) using electronic circuitry of the camera apparatus, cropping the two acquired images such that (i) in response to a user selection of a stereoscopic image, the cropping is applied (A) asymmetrically and (B) such that at least the cameras and the camera platforms are excluded from the cropped images and (ii) the post-cropping effective angle-of-view of each of the two cropped acquired images is equal to α, where α equals or exceeds 180°; and (c) performing at least one of: (i) displaying, on a display device of the camera apparatus, a complex image formed by combining the two cropped acquired images, and (ii) storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images. In some embodiments, cropping the two acquired images can also be such that: (iii) in response to a user selection of a panoramic image, the cropping is applied asymmetrically for at least one of the two images, symmetrically for at least one of the two images, or not for either of the images.

In some embodiments, in response to a user selection of a stereoscopic image, storing the complex image can include storing a stereoscopic image in a format readable by a 3D image display device. In some embodiments, in response to a user selection of a stereoscopic image, displaying the complex image can include displaying a stereoscopic image in a stereoscopic format.

A method of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon is disclosed, wherein the two camera platforms are pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images. The method comprises: (a) acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image; (b) using electronic circuitry of the camera apparatus, selecting a cropping mode in response to the user manipulation of the camera platforms and thereupon cropping the two acquired images in accordance with the selected cropping mode, the cropping mode selected from the group comprising: (i) a 180°+ asymmetric cropping mode for asymmetrically cropping two images acquired in the side-by-side configuration, such that (A) a post-cropping effective angle-of-view of each cropped image is equal to α, where α equals or exceeds 180° and (B) at least the cameras and the camera platforms are excluded from the cropped images, and (ii) a panoramic image processing mode for selectively cropping two images acquired in the back-to-back configuration, such that (A) the cropping is applied asymmetrically for at least one of the two images, symmetrically for at least one of the two images, or not for either of the images, and (B) a post-cropping effective angle-of-view of each cropped image equals or exceeds α; and (c) performing at least one of: (i) displaying, on a display device of the camera apparatus, a complex image formed by combining the two cropped acquired images, and (ii) storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.
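The method steps above (acquire substantially simultaneously, select a cropping mode from the platform configuration, crop, then display and/or store) can be sketched end-to-end. Image handling is simulated with strings, and `acquire`, `store`, and `show` are hypothetical callbacks, not parts of any disclosed API.

```python
# End-to-end sketch of the method steps above. Image handling is simulated
# with strings; `acquire`, `store`, and `show` are hypothetical callbacks.
def capture_and_process(acquire, configuration, store=None, show=None):
    left, right = acquire()  # the two images, substantially simultaneous
    if configuration == "side-by-side":
        # 180°+ asymmetric cropping mode for the stereoscopic image
        cropped = [f"asym_crop({img})" for img in (left, right)]
        complex_image = f"stereo({cropped[0]},{cropped[1]})"
    else:  # back-to-back: panoramic mode, which may leave images uncropped
        cropped = [left, right]
        complex_image = f"pano({left},{right})"
    if store is not None:  # any or all of: raw, cropped, complex image
        store({"raw": (left, right), "cropped": tuple(cropped),
               "complex": complex_image})
    if show is not None:
        show(complex_image)
    return complex_image
```

As in step (c) of the method, display and storage are independent options: either, both, or (for storage) any subset of the raw, cropped, and combined images can be kept.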

In embodiments, a dual-mode stereoscopic/panoramic camera apparatus featuring asymmetric view-cropping comprises: (a) first and second ultra wide-angle cameras for acquiring first and second digital images of a scene; and (b) a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that: (i) when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and (ii) when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions. The camera apparatus also comprises: (c) image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in a side-by-side configuration, so that for each of the first and second digital images: (i) a post-cropping effective angle-of-view equals or exceeds 180°; (ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and (iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.
The camera apparatus also comprises: (d) a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that: (i) when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in panorama mode such that the first and second images each individually provide at least 180° of view and the first and second images are stitched together; and (ii) when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode.

In embodiments, a dual-mode stereoscopic/panoramic camera apparatus can feature asymmetric view-cropping, and can comprise: (a) first and second ultra wide-angle cameras for acquiring first and second digital images of a scene; and (b) a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that: (i) when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and (ii) when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions. The dual-mode stereoscopic/panoramic camera apparatus can additionally comprise (c) image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in the side-by-side configuration, so that for each of the first and second digital images: (i) a post-cropping effective angle-of-view is equal to α, where α equals or exceeds 180°; (ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and (iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

In some embodiments, the camera apparatus can additionally comprise a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that: (i) when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode such that a post-cropping effective angle-of-view of each of the first and second images is equal to α and (ii) when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in a panorama mode such that the first and second images each individually provide an angle of view greater than α and the first and second images are stitched together to collectively provide 360° of view. The video-display-controller can be configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed without cropping. The video-display-controller can be configured so that when the multi-configuration housing assembly is in the back-to-back configuration, each of the first and second images is displayed after cropping to an angle-of-view β, where β > α.

In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to exclude at least the respective other camera. In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to include substantially the entire portion of the scene that comprises stereo overlap. In some embodiments, the respective post-cropping effective angle-of-view of each image acquired by the first and second cameras when the housing assembly is in a side-by-side configuration can be selected so as to exclude at least the housing assembly.

The digital images can comprise still photographs or video content. In some embodiments, the camera apparatus can additionally comprise an electronic communication connection with a non-transitory computer-readable storage medium for the storage of images, and software instructions which, when executed by the image-processing circuitry, cause the image-processing circuitry to store, in the non-transitory storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; a panoramic image resulting from stitching the two images together; and a stereoscopic image in a format readable by a 3D image display device.

Embodiments are disclosed for a camera apparatus for acquiring two images substantially simultaneously and forming therefrom a complex image, where the form of the complex image is user-selectable from a group consisting of panoramic and stereoscopic images. In the embodiments, the camera apparatus can comprise: (a) a housing assembly comprising two pivotably joined camera platforms operable to be user-manipulated, alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the choice of configuration being effective to make a user-selection between a panoramic image and a stereoscopic image; (b) two ultra-wide-angle cameras, each camera installed on one of the two camera platforms such that in the side-by-side configuration the two cameras face in the same direction, and in the back-to-back configuration the two cameras face in opposite directions; and (c) electronic circuitry configured to asymmetrically crop the two images responsive to the choice of configuration being the side-by-side configuration, such that (i) a post-cropping effective angle-of-view of each cropped image is equal to α, where α equals or exceeds 180° and (ii) at least the cameras and the camera platforms are excluded from the cropped images.

In some embodiments, the camera apparatus can additionally comprise a display device at least operable to display a complex image formed by (i) stitching the two acquired images together to form a panoramic image or (ii) cropping and combining the two acquired images together to form a stereoscopic image.

In some embodiments, the electronic circuitry can be additionally configured, responsive to the choice of configuration being the back-to-back configuration, to do one of: (i) asymmetrically crop at least one of the two images; (ii) symmetrically crop at least one of the two images; and (iii) not crop either of the images.

In some embodiments, the camera apparatus can additionally comprise a computer-readable non-transitory storage medium for storing the acquired images and/or the formed complex image in a format readable by the display device.

In some embodiments, when cropping and combining to form the stereoscopic image, the extent of the cropping can be selected so that a majority of the portion of the scene that comprises stereo overlap is included in each of the cropped images.

A method is disclosed, according to embodiments, of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images. The method can comprise (a) acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image; (b) using electronic circuitry of the camera apparatus, cropping the two acquired images such that (i) in response to a user selection of a stereoscopic image, the cropping is applied (A) asymmetrically and (B) such that at least the cameras and the camera platforms are excluded from the cropped images and (ii) the post-cropping effective angle-of-view of each of the two cropped acquired images is equal to α, where α equals or exceeds 180°; and (c) performing at least one of: (i) displaying, on a display device, a complex image formed by combining the two cropped acquired images, and (ii) storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

In some embodiments, cropping the two acquired images can also be such that: (iii) in response to a user selection of a panoramic image, the cropping is applied: (A) asymmetrically for at least one of the two images; (B) symmetrically for at least one of the two images; and (C) not for either of the images.

In some embodiments, in response to a user selection of a stereoscopic image, storing the complex image can include storing a stereoscopic image in a format readable by a 3D image display device. In some embodiments, in response to a user selection of a stereoscopic image, displaying the complex image can include displaying a stereoscopic image in a stereoscopic format.

A method is disclosed, according to embodiments, of forming complex images from images acquired using a camera apparatus comprising two camera platforms and two ultra-wide-angle cameras respectively installed thereupon, the two camera platforms pivotably connected to each other and operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images. The method can comprise (a) acquiring two images, substantially simultaneously, using the two cameras, the two camera platforms being user-manipulated to the side-by-side configuration for a stereoscopic image or to the back-to-back configuration for a panoramic image; and (b) using electronic circuitry of the camera apparatus, selecting a cropping mode in response to the user manipulation of the camera platforms and thereupon cropping the two acquired images in accordance with the selected cropping mode, the cropping mode selected from the group comprising: (i) a 180°+ asymmetric cropping mode for asymmetrically cropping two images acquired in the side-by-side configuration, such that (A) a post-cropping effective angle-of-view of each cropped image is equal to α, where α equals or exceeds 180° and (B) at least the cameras and the camera platforms are excluded from the cropped images, and (ii) a panoramic image processing mode for selectively cropping two images acquired in the back-to-back configuration, such that (A) the cropping is applied asymmetrically for at least one of the two images, symmetrically for at least one of the two images, or not for either of the images, and (B) a post-cropping effective angle-of-view of each cropped image equals or exceeds α.
The method can additionally comprise performing at least one of (i) displaying, on a display device, a complex image formed by combining the two cropped acquired images, and (ii) storing, in a non-transitory computer-readable storage medium, any or all of: the first and second images before cropping; the first and second images after cropping; and a complex image formed by combining the two cropped acquired images.

In embodiments, a dual-mode stereoscopic/panoramic camera apparatus can feature asymmetric view-cropping, and can comprise: (a) first and second ultra wide-angle cameras for acquiring first and second digital images of a scene; and (b) a multi-configuration housing assembly to which the first and second ultra wide-angle cameras are mounted such that: (i) when the housing assembly is in a side-by-side configuration, the first and second cameras are substantially co-oriented and laterally displaced so as to define: (A) an intermediating region-of-space between the first and second cameras and bounded by optical axes of the first and second cameras; and (B) first and second exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first exterior region-of-space is local with respect to the first camera and remote with respect to the second camera, and the second exterior region-of-space is local with respect to the second camera and remote with respect to the first camera; and (ii) when the housing assembly is in a back-to-back configuration, the first and second cameras are oriented in substantially opposite directions. The camera apparatus can additionally comprise (c) image-processing circuitry providing a 180°+ asymmetric cropping mode for reducing respective angles-of-view of images acquired by the first and second cameras when the housing assembly is in a side-by-side configuration, so that for each of the first and second digital images: (i) a post-cropping effective angle-of-view equals or exceeds 180°; (ii) an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and (iii) an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.

In some embodiments, the camera apparatus can additionally comprise a video-display-controller operatively linked to the image-processing circuitry, the video-display-controller for controlling the display, on an onboard or external display device, of the images acquired by the first and second cameras such that: (i) when the multi-configuration housing assembly is in the back-to-back configuration, the video-display-controller causes the first and second images to be displayed in panorama mode such that the first and second images each individually provide at least 180° of view and the first and second images are stitched together; and (ii) when the multi-configuration housing assembly is in the side-by-side configuration, the video-display-controller causes the first and second images to be displayed in a stereoscopic mode where each image is cropped by the image-processing circuitry according to the 180°+ asymmetric cropping mode.

In embodiments, an imaging apparatus 500 can comprise an elongated handle portion 510 having a longitudinally-oriented central axis 900 defining a vertical y direction, and a camera platform assembly 501 comprising (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540, each ultra wide-angle camera 520 comprising a respective ultra wide-angle lens 530, and (iii) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between first and second platform configurations. In the first platform configuration, the first and second camera platforms 540 can be arranged back-to-back, the first and second ultra wide-angle cameras 520 facing in opposite directions, and in the second platform configuration, the first and second camera platforms 540 can be arranged side-by-side transversely to the elongated handle portion 510, so as to define a camera platform plane 904 that is orthogonal to a central-axis-pivot-axis (CA-PA) plane 920 joining the pivot axis 910 and the central axis 900 of the elongated handle portion, the first and second ultra wide-angle cameras 520 facing in the same direction and displaced from each other on either side of the CA-PA plane 920.

In some embodiments, the pivot axis 910 can be parallel to the central axis 900 of the handle portion 510. In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms 540 defines a quarter-circle arc 905. In some embodiments, the first and second ultra wide-angle cameras 520 can be aligned with each other at the same height in the vertical direction within a tolerance of one-half of a diameter 908 of either of the respective ultra wide-angle lenses 530. The first and second ultra wide-angle cameras 520 can be aligned with each other at the same height in the vertical direction within a tolerance of one-fifth of a diameter 908 of either of the respective ultra wide-angle lenses 530. The first and second ultra wide-angle cameras 520 can be aligned with each other at the same height in the vertical direction within a tolerance of 1.5 mm, or a tolerance of one millimeter, or a tolerance of 0.5 mm.
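The recited alignment tolerances lend themselves to a simple numeric check. The following sketch is illustrative only; the function name and millimeter units are assumptions, not part of the claims:

```python
def within_vertical_tolerance(y1_mm: float, y2_mm: float,
                              lens_diameter_mm: float,
                              fraction: float = 0.5) -> bool:
    """Check that two lens heights agree to within `fraction` of the lens
    diameter 908 (the text recites one-half or one-fifth of the diameter,
    or absolute tolerances of 1.5 mm, 1 mm, or 0.5 mm)."""
    return abs(y1_mm - y2_mm) <= fraction * lens_diameter_mm

# Example: a 10 mm lens with camera heights 100.0 mm and 102.5 mm passes
# the one-half-diameter tolerance (2.5 <= 5.0) but fails the
# one-fifth-diameter tolerance (2.5 > 2.0).
```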

In some embodiments, in the second platform configuration, both the first and second camera platforms 540 can be offset from the central axis 900 of the elongated handle portion 510, in a direction in which the first and second ultra wide-angle cameras 520 are facing. In some embodiments, neither one of the first and second camera platforms 540 is rigidly attached to the elongated handle portion 510.

In some embodiments, in the first platform configuration, the first and second ultra wide-angle cameras 520 can also be aligned with each other in a horizontal x direction that is parallel to the CA-PA plane 920 and orthogonal to the vertical y direction, within a tolerance of one-half of a diameter 908 of either of the respective ultra wide-angle lenses 530. In the first platform configuration, the first and second ultra wide-angle cameras 520 can also be aligned with each other in a horizontal x direction that is parallel to the CA-PA plane 920 and orthogonal to the vertical y direction, within a tolerance of one-fifth of a diameter 908 of either of the respective ultra wide-angle lenses 530. In the first platform configuration, the first and second ultra wide-angle cameras 520 can also be aligned with each other in a horizontal x direction that is parallel to the CA-PA plane 920 and orthogonal to the vertical y direction, within a tolerance of 1.5 mm, or a tolerance of one millimeter, or a tolerance of 0.5 mm.

In embodiments, an imaging apparatus 500 can comprise a rigid element comprising an elongated handle portion 510, and a camera platform assembly 501 comprising (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed in the first and second camera platforms 540, and (iii) a pivot element 542 joining the first and second camera platforms 540 and arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between first and second platform configurations. The imaging apparatus 500 can additionally comprise a docking element 573 arranged so as to restrain at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration, and a release mechanism 507, which, when activated by a user, causes the docking element 573 to release said at least one of the first and second camera platforms 540 from being restrained from pivoting from the first platform configuration to the second platform configuration.

In some embodiments, it can be that the elongated handle portion 510 defines a central longitudinal axis 900, the pivot element 542 defines a pivot axis 910, and in the second platform configuration, the first and second camera platforms 540 are arranged side-by-side transversely to the elongated handle portion 510, so as to define a camera platform plane 904 that is orthogonal to a central-axis-pivot-axis (CA-PA) plane 920 joining the pivot axis 910 and the central axis 900 of the elongated handle portion, the first and second ultra wide-angle cameras 520 facing in the same direction and displaced from each other on either side of the CA-PA plane 920.

In some embodiments, the imaging apparatus 500 can additionally comprise a docking frame 570 defining an interior volume 571. The docking element 573 can be attached or installed in or on the docking frame 570. In some embodiments, the docking element 573 can be attached or installed in or on the elongated handle portion 510. The docking frame 570 can be rigidly mechanically coupled to the elongated handle portion 510.

In some embodiments, in the first platform configuration, at least one of the first and second camera platforms 540 can be disposed within the interior volume 571 of the docking frame 570, such that the docking frame 570 encloses the at least one of the first and second camera platforms 540 on at least three sides. In some embodiments, both of the first and second camera platforms 540 can be disposed within the interior volume 571 of the docking frame 570, such that the docking frame encloses both of the first and second camera platforms 540 on at least three sides.

In some embodiments, the release mechanism 507 can be mounted or installed on the elongated handle portion 510. In some embodiments, the release mechanism 507 can be mounted or installed on the docking frame 570. The docking element 573 can be attached or installed on an interior-volume-facing surface 569 of the docking frame 570.

In some embodiments, the restraining of said at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration can include restraining against a persistent force. In some embodiments, the releasing of the first and second camera platforms 540 can include engaging or disengaging a spring. In some embodiments, the releasing of the first and second camera platforms 540 can include transferring a force to the first and second camera platforms 540 so as to assist a pivoting from the first platform configuration to the second platform configuration. In some embodiments, it can be that the first and second camera platforms 540 are mechanically coupled so as to rotate in tandem with each other. The transferring of a force to the first and second camera platforms 540 can cause the first and second camera platforms 540 to pivot in tandem with each other. The persistent force can cause the first and second camera platforms 540 to pivot in tandem with each other in response to activation of the release mechanism 507.

In some embodiments, the pivoting between the first and second platform configurations by each of the camera platforms 540 can define a quarter-circle arc 905. The docking frame 570 can include an internal member 574 that blocks at least one of the first and second camera platforms 540 from pivoting through an arc 905 of more than a quarter-circle.

In embodiments, an imaging apparatus 500 can comprise a rigid assembly comprising an elongated handle portion 510 and a docking frame 570, and a camera platform assembly 501 comprising: (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed in the first and second camera platforms 540, each ultra wide-angle camera 520 comprising a respective ultra wide-angle lens 530, and (iii) a pivot element 542 joining the first and second camera platforms 540 and arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between first and second platform configurations. In the first platform configuration, the first and second camera platforms 540 can be arranged back-to-back and disposed within the docking frame, such that the docking frame 570 encloses the first and second camera platforms 540 on at least three sides, and in the second platform configuration, the first and second camera platforms 540 can be arranged side-by-side. The imaging apparatus 500 can additionally comprise a plurality of electronic display elements 578 disposed on one or more surfaces of the rigid assembly.

In some embodiments, the plurality of electronic display elements 578 can include a mode indicator. The mode indicator can be one of a video mode indicator, a stills mode indicator, and a time lapse mode indicator. In some embodiments, the plurality of electronic display elements 578 can include a status indicator. The status indicator can be one of a power status indicator, a recording status indicator, a processing status indicator and a communications status indicator. The plurality of electronic display elements 578 can be disposed on one or more surfaces of the docking frame 570. The plurality of electronic display elements 578 can be disposed on one or more surfaces of the elongated handle portion 510. It can be that at least one of the plurality of electronic display elements 578 is disposed on a surface of the docking frame 570 and at least one of the plurality of electronic display elements 578 is disposed on a surface of the elongated handle portion 510.

In embodiments, an imaging apparatus 500 can comprise an elongated handle portion 510 having a longitudinally-oriented central axis 900 defining a vertical y direction, and a camera platform assembly 501 comprising (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540, and (iii) a pivot element 542 joining the first and second camera platforms 540 and being arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between first and second platform configurations. In the first platform configuration, the first and second camera platforms 540 can be arranged back-to-back, the first and second ultra wide-angle cameras 520 facing in opposite directions and aligned with each other at the same height in the vertical direction within a tolerance of one-half of a diameter 908 of either of the respective ultra wide-angle lenses 530. In the second platform configuration, the first and second camera platforms 540 can be arranged side-by-side. Each respective ultra wide-angle camera 520 has an angle-of-view of at least 200°.

In some embodiments, each respective ultra wide-angle camera 520 can have an angle-of-view of at least 205°. In some embodiments, each respective ultra wide-angle camera 520 can have an angle-of-view of at least 210°.

In some embodiments, the first and second ultra wide-angle cameras 520 can be aligned with each other at the same height in the vertical direction within a tolerance of one-fifth of a diameter 908 of either of the respective ultra wide-angle lenses 530. The first and second ultra wide-angle cameras 520 can be aligned with each other at the same height in the vertical direction within a tolerance of one millimeter.

In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms can define a quarter-circle arc 905.

In embodiments, an imaging apparatus 500 can comprise a rigid assembly comprising an elongated handle portion 510 and a docking frame 570, the elongated handle portion 510 having a longitudinally-oriented central axis 900, and a camera platform assembly 501 comprising: (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540, each ultra wide-angle camera 520 comprising a respective ultra wide-angle lens 530, and (iii) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between (i) a first platform configuration in which the first and second camera platforms 540 are arranged back-to-back and the first and second ultra wide-angle cameras 520 face in opposite directions, and (ii) a second platform configuration in which the first and second camera platforms 540 are arranged side-by-side. The docking frame 570 can be bisected by the longitudinally-oriented central axis 900 of the elongated handle portion 510, and respective centerlines 911 of the two ultra-wide-angle lenses 530, when the camera platforms are in the first platform configuration, can be laterally offset from the longitudinally-oriented central axis 900 of the elongated handle portion 510 in the direction of the pivot axis 910.

In some embodiments, the docking frame 570 can be bisected by the longitudinally-oriented central axis 900 of the elongated handle portion 510 in a first dimension parallel to a central-axis-pivot-axis (CA-PA) plane 920 joining the pivot axis 910 and the central axis 900 of the elongated handle portion. The docking frame 570 can be bisected by the longitudinally-oriented central axis 900 of the elongated handle portion 510 in a second dimension orthogonal to the central-axis-pivot-axis (CA-PA) plane 920 joining the pivot axis 910 and the central axis 900 of the elongated handle portion.

In some embodiments, the respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by a thickness of a member of the docking frame 570. The respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by at least one millimeter. The respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by at least two millimeters. The respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by at least three millimeters.

In some embodiments, the respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by at least one millimeter (for example, between 1 and 5 millimeters), or at least two millimeters (for example, between 2 and 5 millimeters), or at least three millimeters (for example, between 3 and 5 millimeters). In some embodiments, the respective centerlines 911 can be offset from the longitudinally-oriented central axis 900 by a value that is expressed as a percentage of an internal width 913 of frame 570. For example, the respective centerlines 911 can be offset by at least 3% (e.g., between 3% and 12%), or at least 5% (e.g., between 5% and 12%, or between 5% and 10%), or at least 6% (e.g., between 6% and 9%), or at least 7% (e.g., between 7% and 8%) of the internal width 913 of frame 570. In different examples, the respective centerlines 911 can be offset by at most 15%, or at most 12%, or at most 10%, or at most 8% of the internal width 913 of frame 570.

The respective centerlines 911 can be laterally offset from the longitudinally- oriented central axis 900 by offset 912, where offset 912 is equal to at least 5% and at most 10% of an internal width 913 of the docking frame 570. The respective centerlines 911 can be laterally offset from the longitudinally-oriented central axis 900 by offset 912, where offset 912 is equal to at least 6% and at most 9% of an internal width 913 of the docking frame 570. The respective centerlines 911 can be laterally offset from the longitudinally-oriented central axis 900 by offset 912, where offset 912 is equal to at least 7% and at most 8% of an internal width 913 of the docking frame 570.
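The offset-as-percentage ranges above amount to a band check on the ratio of offset 912 to internal width 913. A minimal illustrative sketch (function name and default band assumed for illustration):

```python
def offset_within_band(offset_mm: float, internal_width_mm: float,
                       lo_frac: float = 0.05, hi_frac: float = 0.10) -> bool:
    """Check whether a lens-centerline offset 912 falls within a fractional
    band of the docking frame's internal width 913 (default band: at least
    5% and at most 10%, one of the recited ranges)."""
    ratio = offset_mm / internal_width_mm
    return lo_frac <= ratio <= hi_frac

# Example: with an internal width of 40 mm, a 3 mm offset is 7.5% of the
# width, inside the 5%-10% band; a 5 mm offset (12.5%) is outside it.
```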

In embodiments, an imaging apparatus 500 can comprise an elongated handle portion 510 having a longitudinally-oriented central axis 900 defining a vertical y direction, and a camera platform assembly 501 comprising (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540 and defining first and second optical axes 901, and (iii) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between (i) a first platform configuration in which the first and second camera platforms 540 are arranged back-to-back and the first and second ultra wide-angle cameras 520 face in opposite directions, and (ii) a second platform configuration in which the first and second camera platforms 540 are arranged side-by-side. The imaging apparatus 500 can additionally comprise a frame member 570 disposed such that, when the first and second camera platforms 540 are in the first platform configuration, (i) the frame member 570 encloses the first and second camera platforms 540 on at least three sides, and (ii) movement of the first and second camera platforms 540 in the y direction is restricted by the frame member 570.

In some embodiments, the movement of first and second camera platforms 540 in the y direction can be restricted by the frame member 570 to be within a range of no more than 2 millimeters. The movement of first and second camera platforms 540 in the y direction can be restricted by the frame member 570 to be within a range of no more than 1 millimeter.

In some embodiments, in the second platform configuration, the first and second camera platforms 540 can be arranged side-by-side transversely to the elongated handle portion 510, the first and second ultra wide-angle cameras 520 facing in the same direction and displaced from each other on either side of a central-axis-pivot-axis (CA-PA) plane 920 joining the pivot axis 910 and the central axis 900 of the elongated handle portion.

In some embodiments, the frame member 570 can define a frame plane 903 parallel to first and second optical axes 901, wherein the first and second camera platforms 540 are arranged side-by-side transversely to the elongated handle portion 510 so as to define a camera platform plane 904 that is orthogonal to the frame plane 903.

In embodiments, an imaging apparatus 500 can comprise: (a) first and second camera platforms 540; (b) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540, each ultra wide-angle camera 520 comprising a respective ultra wide-angle lens 530; (c) a frame member 570 defining an interior volume 571 and a frame plane 903; (d) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the frame member 570, of each of the first and second camera platforms 540 between (i) a first platform configuration in which the first and second camera platforms 540 are arranged back-to-back and the first and second ultra wide-angle cameras 520 face in opposite directions, and (ii) a second platform configuration in which the first and second camera platforms 540 are arranged side-by-side so as to define a camera platform plane 904 that is orthogonal to the frame plane 903, wherein the first and second camera platforms 540 are displaced from each other on either side of the frame plane 903; and (e) a connection element 548 adapted for electronic communication between the imaging apparatus 500 and an external device 700 having a display screen 701.

In some embodiments, the connection element 548 can include a female connector.

In some embodiments, the first and second ultra-wide-angle cameras 520 can be operable to acquire images of a scene in response to control commands received from the external device 700. In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms 540 can define a quarter-circle arc 905.

In some embodiments, in the second platform configuration, both the first and second camera platforms 540 can be offset from the central axis 900 of the elongated handle portion 510, in a direction in which the first and second ultra wide-angle cameras 520 are facing.

In some embodiments, the imaging apparatus 500 can additionally comprise a docking element 573 arranged so as to restrain at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration, and a release mechanism 507, which, when activated by a user, causes the docking element 573 to release said at least one of the first and second camera platforms 540 from being restrained from pivoting from the first platform configuration to the second platform configuration. The docking element 573 can be attached or installed on an interior-volume-facing surface 569 of the frame member 570. In some embodiments, the external device 700 can comprise a cellphone. The display screen 701 of the external device 700 can be operative to display indications of mode and status for the imaging apparatus 500. The display screen 701 of the external device 700 can be operative to receive inputs from a user for operation of the imaging apparatus 500.

In embodiments, an imaging apparatus 500 can comprise: (a) first and second camera platforms 540; (b) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540, each ultra wide-angle camera 520 comprising a respective ultra wide-angle lens 530; (c) a frame member 570 defining an interior volume 571 and a frame plane 903; (d) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the frame member 570, of each of the first and second camera platforms 540 between (i) a first platform configuration in which the first and second camera platforms 540 are arranged back-to-back and the first and second ultra wide-angle cameras 520 face in opposite directions, and (ii) a second platform configuration in which the first and second camera platforms 540 are arranged side-by-side so as to define a camera platform plane 904 that is orthogonal to the frame plane 903, wherein the first and second camera platforms 540 are displaced from each other on either side of the frame plane 903; and (e) an attachment arrangement 549 adapted for rigid mechanical attachment to, and electronic communication with, an external device 700 having a display screen 701.

In some embodiments, the attachment arrangement 549 can reside on an outer surface of the frame 570. In some embodiments, the attachment arrangement 549 can include a female connector.

In some embodiments, the external device 700 can comprise a cellphone. The first and second ultra-wide-angle cameras 520 can be operable to acquire images of a scene in response to control commands received from the external device 700. The display screen 701 of the external device 700 can be operative to display indications of mode and status for the imaging apparatus 500. The display screen 701 of the external device 700 can be operative to receive inputs from a user for operation of the imaging apparatus 500.

In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms 540 can define a quarter-circle arc 905. In some embodiments, in the second platform configuration, both the first and second camera platforms 540 are offset from the central axis 900 of the elongated handle portion 510, in a direction in which the first and second ultra wide-angle cameras 520 are facing. In some embodiments, the imaging apparatus 500 can additionally comprise a docking element 573 arranged so as to restrain at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration, and a release mechanism 507, which, when activated by a user, causes the docking element 573 to release said at least one of the first and second camera platforms 540 from being restrained from pivoting from the first platform configuration to the second platform configuration. The docking element 573 can be attached or installed on an interior-volume-facing surface 569 of the frame member 570.

In embodiments, an imaging apparatus 500 can comprise an elongated handle portion 510 having a longitudinally-oriented central axis 900 defining a vertical y direction, a frame member 570 defining an interior volume 571, and a camera platform assembly 501 comprising (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed on the first and second camera platforms 540 and defining first and second optical axes 901, and (iii) a pivot element 542 joining the first and second camera platforms 540 and defining a pivot axis 910, the pivot element 542 being arranged so as to enable the pivoting, with respect to the frame member 570, of each of the first and second camera platforms 540 between (i) a first platform configuration in which the first and second camera platforms 540 are arranged back-to-back and the first and second ultra wide-angle cameras 520 face in opposite directions, and (ii) a second platform configuration in which the first and second camera platforms 540 are arranged side-by-side. The elongated handle portion 510 can include first portions 580 proximate to the camera platform assembly 501 and second portions 581 more distant from the pivot assembly than the first portions, each of the first portions 580 intercepting a wider angle-of-view range with respect to the first and second ultra-wide-angle cameras 520 than the second portions 581.

In some embodiments, each respective one of the first portions 580 can have an orientation that is greater than 0° and less than 90° with respect to the vertical y direction. It can be that this orientation, or slope, is not constant over the height, in the vertical y direction, of the respective first portions 580.

In some embodiments, the frame member 570 can define a frame plane 903 parallel to first and second optical axes 901, wherein the first and second camera platforms 540 are arranged side-by-side transversely to the elongated handle portion 510 so as to define a camera platform plane 904 that is orthogonal to the frame plane 903.

In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms 540 can define a quarter-circle arc 905. In some embodiments, in the second platform configuration, both the first and second camera platforms 540 can be offset from the central axis 900 of the elongated handle portion 510, in a direction in which the first and second ultra wide-angle cameras 520 are facing. In some embodiments, the imaging apparatus 500 can additionally comprise a docking element 573 arranged so as to restrain at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration, and a release mechanism 507, which, when activated by a user, causes the docking element 573 to release said at least one of the first and second camera platforms 540 from being restrained from pivoting from the first platform configuration to the second platform configuration. The docking element 573 can be attached or installed on an interior-volume-facing surface 569 of the frame member 570.

In embodiments, an imaging apparatus 500 can comprise a rigid element comprising an elongated handle portion 510, and a camera platform assembly 501 comprising: (i) first and second camera platforms 540, (ii) first and second ultra wide-angle cameras 520 respectively installed in the first and second camera platforms 540, and (iii) a pivot element 542 joining the first and second camera platforms 540 and arranged so as to enable the pivoting, with respect to the elongated handle portion 510, of each of the first and second camera platforms 540 between first and second platform configurations. The imaging apparatus 500 can additionally comprise a biasing element 543 that transfers a persistent force urging both of the first and second camera platforms 540 to pivot from the first platform configuration to the second platform configuration.

In some embodiments, the biasing element 543 can comprise a spring. The biasing element 543 can comprise a torsion spring. In some embodiments, the imaging apparatus 500 can additionally comprise a docking element 573 arranged so as to restrain at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration, and a release mechanism 507, which, when activated by a user, causes the docking element 573 to release said at least one of the first and second camera platforms 540 from being restrained from pivoting from the first platform configuration to the second platform configuration.

In some embodiments, the restraining of said at least one of the first and second camera platforms 540 from pivoting from the first platform configuration to the second platform configuration can include restraining against a persistent force. In some embodiments, the releasing of the first and second camera platforms 540 can include engaging or disengaging a spring.

In some embodiments, the first and second camera platforms 540 can be mechanically coupled so as to rotate in tandem with each other. In some embodiments, the persistent force can cause the first and second camera platforms 540 to pivot in tandem with each other in response to activation of the release mechanism 507. In some embodiments, the pivoting between the first and second platform configurations by each of the respective camera platforms 540 can define a quarter-circle arc 905.

In some embodiments, the imaging apparatus 500 can additionally comprise a docking frame 570 defining an interior volume 571. The docking element 573 can be attached or installed in or on the docking frame 570. Both of the first and second camera platforms 540 can be disposed within the interior volume 571 of the docking frame 570, such that the docking frame encloses both of the first and second camera platforms 540 on at least three sides. In some embodiments, the docking frame 570 can include an internal member 574 that blocks at least one of the first and second camera platforms 540 from pivoting through an arc 905 of more than a quarter-circle.

In any of the embodiments disclosed herein, an imaging apparatus 500 can additionally comprise a configuration detector for detecting whether the first and second camera platforms are in the first platform configuration or the second platform configuration. In any of the embodiments disclosed herein, an imaging apparatus 500 can additionally comprise a dewarp module for dewarping images acquired by the first and second ultra-wide-angle cameras 520.
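As a hypothetical sketch of how a configuration detector's output could route processing between the dewarp, stitching, and stereo-synthesis modules described in this section (all function bodies below are placeholders, not the application's algorithms):

```python
# Illustrative routing only: real dewarping, stitching, and stereo
# synthesis are substantial image-processing steps not shown here.
def dewarp(img):
    """Placeholder: real dewarping would invert the fisheye lens mapping."""
    return img

def stitch_360_panorama(a, b):
    """Placeholder for the stitching module (first platform configuration)."""
    return ("panorama", a, b)

def synthesize_stereo(a, b):
    """Placeholder for stereoscopic synthesis (second platform configuration)."""
    return ("stereo", a, b)

def process_frames(platform_config: str, left_img, right_img):
    """Dewarp both images, then dispatch on the detected configuration."""
    dewarped = (dewarp(left_img), dewarp(right_img))
    if platform_config == "back-to-back":
        return stitch_360_panorama(*dewarped)
    return synthesize_stereo(*dewarped)
```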

In any of the embodiments disclosed herein, an imaging apparatus 500 can additionally comprise a stitching module for creating a 360° panoramic image of a scene viewed collectively by the first and second cameras 520 when in the first platform configuration.

In any of the embodiments disclosed herein, an imaging apparatus 500 can additionally comprise a communications arrangement for transmitting, to an external device: (i) for images acquired in the first platform configuration, a 360° panoramic image stitched from a pair of dewarped images, and (ii) for images acquired in the second platform configuration, either a pair of dewarped images or a stereoscopic synthesis thereof.

In any of the embodiments disclosed herein, an imaging apparatus 500 can additionally comprise a biasing element 543 that transfers a persistent force urging both of the first and second camera platforms 540 to pivot from the first platform configuration to the second platform configuration. The biasing element can comprise a torsion spring.

In any of the embodiments disclosed herein, each respective ultra-wide-angle camera 520 can have an angle-of-view of at least 200°. In any of the embodiments disclosed herein, each respective ultra-wide-angle camera 520 can have an angle-of-view of at least 205°. In any of the embodiments disclosed herein, each respective ultra-wide-angle camera 520 can have an angle-of-view of at least 210°.

In any of the embodiments disclosed herein, at least a surface portion of each respective lens 530 can occupy a portion of the area of each of the first and second camera platforms 540, and the portion occupied by the at least a surface portion of each respective lens 530 can be at least 20% of the area of each of the first and second camera platforms 540. In any of the embodiments disclosed herein, the portion occupied by the at least a surface portion of each respective lens 530 can be at least 25% of the area of each of the first and second camera platforms 540. In any of the embodiments disclosed herein, the portion occupied by the at least a surface portion of each respective lens 530 can be at least 30% of the area of each of the first and second camera platforms 540.

In any of the embodiments disclosed herein, it can be that the aspect ratio of the largest dimension of either one of the first and second camera platforms 540 divided by the second-largest dimension thereof is not more than 2.0. In any of the embodiments disclosed herein, it can be that the aspect ratio of the largest dimension of either one of the first and second camera platforms 540 divided by the second-largest dimension thereof is not more than 1.5. In any of the embodiments disclosed herein, it can be that the aspect ratio of the largest dimension of either one of the first and second camera platforms 540 divided by the second-largest dimension thereof is not more than 1.25. In any of the embodiments disclosed herein, it can be that the aspect ratio of the largest dimension of either one of the first and second camera platforms 540 divided by the second-largest dimension thereof is not more than 1.1.

In any of the embodiments disclosed herein, each one of the first and second camera platforms 540 can comprise a pivot attachment member 541, and each respective ultra-wide-angle lens 530 can be centered on the portion of the camera platform 540 that is not the pivot attachment member 541.

In any of the embodiments disclosed herein wherein an elongated handle portion 510 is disclosed, the elongated handle portion 510 can include a substantially flat bottom portion 511, such that when the imaging apparatus 500 is oriented vertically with the camera platform assembly 501 above the elongated handle portion 510, the imaging apparatus 500 can stand unattended on a flat surface.

In any of the embodiments disclosed herein wherein an elongated handle portion 510 is disclosed, the elongated handle portion 510 can include a cellphone. In any of the embodiments disclosed herein, the imaging apparatus 500 can be a part of a kit 461 that additionally comprises 3D glasses 460. A kit 461 can comprise the imaging apparatus 500 of any preceding claim and a pair of 3D glasses 460.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows respective (A) top, (B) bottom, (C) front and (D) rear views in a second platform configuration of an imaging apparatus according to embodiments.

Fig. 2 shows respective (A) front-left perspective, (B) left-side, and (C) left-rear perspective views, and (D) a detail of the left-rear perspective view in a second platform configuration of the imaging apparatus of Fig. 1.

Fig. 3 shows respective (A) left-side with (A1) detail showing the offset of a camera centerline from a centerline of a frame, (B) rear, (C) bottom and (D) left-rear perspective views, all in the first platform configuration, of the imaging apparatus of Figs. 1 and 2.

Fig. 4 shows additional respective (A) top, (B) rear and (C) left-side views in the second platform configuration of the imaging apparatus of Figs. 1-3.

Fig. 5 shows respective (A) top, (B) left-side and (C) rear views, and (D) a detail of a left-rear perspective view, all in the second platform configuration, of an imaging apparatus according to alternative embodiments.

Fig. 6 shows a rear view in the first platform configuration, of the imaging apparatus of Fig. 5.

Fig. 7 shows the Fig. 6 rear, first platform configuration view of the imaging apparatus of Figs. 5-6, together with an external device in electronic communication therewith.

Fig. 8 shows the Fig. 6 rear, first platform configuration view of the imaging apparatus of Figs. 5-6, mechanically connected to an external device.

Fig. 9 shows (A) a second platform configuration view and (B) a first platform configuration view of an imaging apparatus with schematic representation of angles-of-view of respective ultra-wide-angle cameras, according to embodiments.

Fig. 10 shows (A) an imaging apparatus and external storage and display devices, and connections therebetween according to embodiments, and (B) a kit containing an imaging apparatus and stereoscopic glasses according to embodiments.

Fig. 11 shows four examples of prior art calibration target patterns.

Fig. 12 shows schematic left-rear perspective and front views of a camera apparatus in a back-to-back configuration with one or more close onboard calibration targets according to embodiments.

Fig. 13 shows schematic front and left-side views of a camera apparatus in a side-by-side configuration with a far onboard calibration target according to embodiments.

Fig. 14 shows schematic front and left-side views of a camera apparatus in a side-by-side configuration with a far onboard calibration target, and one or more close onboard calibration targets, according to embodiments.

Fig. 15 shows schematic left-side and front views of a camera apparatus in a back-to-back configuration and a far onboard calibration target according to embodiments.

Fig. 16 is a schematic front-view of a camera apparatus in a back-to-back configuration and folding telescopic calibration surfaces with respective far onboard calibration targets according to an embodiment.

Fig. 17 is a schematic front view of a camera apparatus in a back-to-back configuration and extensible calibration surfaces with respective far onboard calibration targets according to an embodiment.

Fig. 18 is a schematic perspective view of a camera apparatus and a support stand, with far calibration targets on respective support members of the support stand, according to an embodiment.

Fig. 19 is a schematic perspective view of a camera apparatus and a support stand, with support members of the support stand suitable to function as far calibration targets, according to an embodiment.

Fig. 20 shows a block system diagram of a camera apparatus and related systems and components, according to embodiments.

Fig. 21 is a schematic rear view of a camera apparatus in a side-by-side configuration, with onboard display screens on the rear of respective camera platforms, according to an embodiment.

Fig. 22 is a schematic front view of a camera apparatus, with an external display device and an external storage medium, according to embodiments.

Fig. 23 is a schematic drawing of left-rear perspective and front views of a camera apparatus in a back-to-back configuration, each showing respective photodetector planes of the two onboard cameras, according to embodiments.

Fig. 24 is a schematic drawing of a left-rear perspective view of a camera apparatus in a side-by-side configuration, showing respective photodetector planes of the two onboard cameras and respective Euler angles, according to embodiments.

Fig. 25 is a schematic illustration of stitching two images acquired in a back-to-back configuration so as to create a panoramic image, according to the known prior art.

Fig. 26 is a schematic illustration of combining two images acquired in the side-by-side configuration so as to synthesize a 3D stereoscopic image, according to the known prior art.

Fig. 27 is a schematic illustration of potential defects in images acquired by a camera apparatus in the back-to-back configuration, due to rotation through each of the Euler angles of one of the two cameras of the camera apparatus, according to embodiments.

Fig. 28 is a schematic illustration of misalignment of stereo images synthesized from pairs of images acquired by a camera apparatus in the side-by-side configuration, resulting from rotation through each of the Euler angles of one of the two cameras of the camera apparatus, according to embodiments.

Fig. 29 is a schematic illustration showing improvement, following camera re-calibration, in the results of stitching the images of Fig. 27, according to embodiments.

Fig. 30 is a schematic illustration showing improvement, following camera re-calibration, in the results of synthesizing a stereo image from a pair of images of Fig. 28, according to embodiments.

Fig. 31 shows a flowchart of a method for stitching images acquired by a camera apparatus in the back-to-back configuration to create panoramic images, according to embodiments.

Fig. 32 shows a flowchart of a method for synthesizing stereo images from pairs of images acquired by a camera apparatus in the side-by-side configuration, according to embodiments.

Fig. 33 shows a flowchart of a method for manually-initiated calibrations of a camera apparatus in connection with stitched panoramic images, according to embodiments.

Fig. 34 shows a flowchart of a method for manually-initiated calibrations of a camera apparatus in connection with synthesized stereo images, according to embodiments.

Figs. 35A and 35B are schematic illustrations of a platform assembly in a side-by-side configuration and in a back-to-back configuration, respectively, according to embodiments.

Fig. 36 is a schematic two-dimensional representation of angles-of-view and corresponding regions-of-space of two respective cameras in a camera apparatus according to embodiments.

Fig. 37 is a schematic two-dimensional representation of angles-of-view of the two cameras of Fig. 36 in the absence of other components of the camera apparatus.

Fig. 38 is a schematic two-dimensional representation of angles-of-view of the two cameras of Fig. 36 showing the effective angle-of-view of a single camera before cropping, according to embodiments.

Fig. 39 shows four schematic representations of cropping lines of hemispherical images according to embodiments.

Fig. 40 shows the drawing of Fig. 38 while additionally showing a cropping angle for an image acquired by a single camera, according to embodiments.

Fig. 41 is a schematic two-dimensional representation of angles-of-view of the two cameras of Fig. 36, showing cropping angles for both cameras, according to embodiments.

Fig. 42 shows the representation of Fig. 37 while additionally showing the area of stereo overlap between the angles-of-coverage of the two cameras, according to embodiments.

Fig. 43 is a schematic two-dimensional representation of effective post-cropping angles-of-view and area of stereo overlap of the two cameras based on the ‘cropping angle’ of Fig. 40, according to embodiments.

Fig. 44 is a block diagram of a camera apparatus according to embodiments.

Fig. 45 is a flow chart of an imaging technique.

Figs. 46A-46H illustrate first and second platform assemblies with cameras disposed thereon.

Fig. 47 explains a problem solved by embodiments of the invention (e.g., for four cameras in a L1-R1-L2-R2 configuration).

Fig. 48A shows the angle numbering convention used for Figs. 47-55.

Fig. 48B shows four objects in a scene.

Figs. 49A-49D illustrate a human reference viewer RV in different orientations.

Fig. 50A shows two human viewers.

Figs. 50B and 52A-52B show four cameras disposed in a L1-R1-L2-R2 configuration.

Figs. 51A-51D and 52C illustrate ranges of view for the L1-R1-L2-R2 camera configuration of Fig. 50B.

Figs. 53A-53C illustrate visibility as a function of angle.

Figs. 54A-54C illustrate generating stereoscopic images according to a first stitching technique.

Fig. 54D illustrates the objects of Fig. 48B as generated according to the first stitching technique.

Fig. 54E shows the stereo strength effect as a function of angle for an illustrative example according to the first stitching technique.

Figs. 55A-55B illustrate generating stereoscopic images according to a second stitching technique.

Figs. 55C-55E illustrate generating stereoscopic images according to the second stitching technique.

Fig. 55E shows the stereo strength effect as a function of angle for an illustrative example according to the second stitching technique.

Figs. 56, 57A-57B and 58A-58B relate to examples where (i) the ‘asymmetric cropping’ embodiments (see, for example, Figs. 36-44) are combined with (ii) teachings related to the L1-R1-L2-R2 embodiments.

DETAILED DESCRIPTION OF THE EMBODIMENTS

The invention is herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. Throughout the drawings, like-referenced characters are generally used to designate like elements.

Embodiments of the present invention relate to an imaging apparatus comprising first and second onboard ultra-wide-angle cameras. Ultra-wide-angle cameras can be digital cameras, i.e., cameras with imaging sensors such as charge-coupled devices (CCD) or complementary metal oxide semiconductor (CMOS) devices, equipped with ultra-wide-angle lenses and/or operative to process ultra-wide images, for example to convert ultra-wide images into ‘normal’ images with reduced distortion. An example of an ultra-wide-angle lens is a fisheye lens. As is known in the art, images acquired using a fisheye lens can be processed so as to render images with reduced distortion by using software routines or firmware routines known as ‘de-warping’ routines. Ultra-wide-angle lenses can be integral to cameras, factory installed, or installed after-market.
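By way of non-limiting illustration, the inverse pixel mapping underlying a typical de-warping routine can be sketched as follows. The sketch assumes an ideal equidistant fisheye projection (image radius proportional to the angle off the optical axis); the function name and the default field-of-view and image-size values are hypothetical, and a practical routine would additionally correct per-lens distortion using calibration data.

```python
import math

def fisheye_source_coords(lon, lat, fov_deg=210.0, size=1000):
    """Map a viewing direction (lon/lat, in radians) to pixel coordinates
    in a square fisheye image, assuming an ideal equidistant projection
    (r proportional to theta) centered on the optical axis (lon=0, lat=0)."""
    # Direction vector of the ray in camera coordinates (z = optical axis).
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle off the optical axis
    half_fov = math.radians(fov_deg) / 2.0
    if theta > half_fov:
        return None                             # ray falls outside the lens's view
    r = (theta / half_fov) * (size / 2.0)       # equidistant: radius scales with theta
    phi = math.atan2(y, x)                      # azimuth around the optical axis
    u = size / 2.0 + r * math.cos(phi)
    v = size / 2.0 + r * math.sin(phi)
    return u, v
```

A dewarp module would evaluate such a mapping for every output pixel of the dewarped image and sample the fisheye image at the returned coordinates.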

In embodiments, a dual-mode imaging apparatus can operate in either of two operating modes: When the platforms are manipulated to a back-to-back configuration, the imaging apparatus can operate in a panoramic mode. The back-to-back configuration is also called the “first platform configuration” in this disclosure and the terms are used interchangeably. When first and second onboard camera platforms bearing respective ultra-wide-angle cameras are manipulated to a side-by-side configuration, the imaging apparatus can operate in a stereoscopic mode. The side-by-side configuration is also called the “second platform configuration” in this disclosure, and the terms are used interchangeably.

An imaging apparatus can be equipped with a configuration sensor or configuration detector which is operative to detect the configuration (i.e., first platform configuration or second platform configuration) and, upon detection, cause the imaging apparatus to automatically switch to the appropriate operating mode. A configuration sensor can include a position detector; it is known in the art to deploy, for example, Hall-effect transducers, reed switches, or limit switches, etc., for accurate sensing of absolute or relative position. In some embodiments, the manipulation of the camera platforms can be assisted, for example by a release switch and spring, when moving from one configuration to another. In some embodiments, an imaging apparatus may automatically disable shutter function when not in one of the two abovementioned configurations, and in other embodiments, an imaging apparatus can select an operating mode based on which of the two configurations is closer to its instant situation or, alternatively, based on the last valid configuration.
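By way of non-limiting illustration, the mode-selection logic described above might be sketched as follows, assuming the configuration sensor reports a hinge angle (0° for back-to-back, 90° for side-by-side); the function name, tolerance, and fallback policies are hypothetical examples only.

```python
def select_mode(hinge_angle_deg, tolerance_deg=5.0, fallback="closest"):
    """Pick an operating mode from a sensed hinge angle.

    0 degrees  -> back-to-back  (panoramic / first platform configuration)
    90 degrees -> side-by-side  (stereoscopic / second platform configuration)

    Between the two configurations, either disable the shutter or fall
    back to whichever configuration is closer, per the embodiments above.
    """
    if abs(hinge_angle_deg - 0.0) <= tolerance_deg:
        return "panoramic"
    if abs(hinge_angle_deg - 90.0) <= tolerance_deg:
        return "stereoscopic"
    if fallback == "closest":
        # Pick the configuration nearer to the instant hinge position.
        return "panoramic" if hinge_angle_deg < 45.0 else "stereoscopic"
    return "shutter_disabled"
```

For example, `select_mode(88.0)` would report the stereoscopic mode, while `select_mode(45.0, fallback="disable")` would disable the shutter mid-transition.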

When in stereoscopic (or “stereo”) mode, images acquired by the first and second ultra-wide-angle cameras (also called, interchangeably, “cameras” in this disclosure, i.e., all “cameras” in this disclosure are ultra-wide-angle cameras in that they have respective ultra-wide-angle lenses) can be combined and presented (e.g., as a live video stream) on a display screen in 3D stereo mode. They can also (or alternatively) be captured and stored in a computer-readable storage medium in an image format, for example as a 3D image that can be viewed using stereoscopic glasses or an autostereo display.
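By way of non-limiting illustration, one simple way to combine a left/right image pair for stereoscopic presentation is a red-cyan anaglyph, sketched below on images represented as nested lists of RGB tuples; this is only one of many possible presentation formats and is not prescribed by the embodiments.

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one left-eye and one right-eye RGB pixel into a red-cyan
    anaglyph pixel: the red channel comes from the left image, the green
    and blue channels from the right image."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

def synthesize_anaglyph(left_img, right_img):
    """Apply anaglyph_pixel over two same-sized images, each represented
    as a list of rows of (R, G, B) tuples."""
    return [[anaglyph_pixel(l, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]
```

Viewed through red-cyan glasses, each eye then sees (approximately) only the image intended for it, producing the depth effect.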

When in a panorama mode, images acquired by the first and second cameras are stitched together and presented (e.g., as a live video stream) as a panorama of a scene viewable by the camera device (e.g., on opposite sides thereof).
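By way of non-limiting illustration, a stitching operation typically feathers the seam between the two dewarped hemispherical images by cross-fading pixel weights across the overlap region. The sketch below shows such a cross-fade for one overlapping scanline segment of scalar intensities; alignment of the two segments using calibration data is assumed to have been performed beforehand, and the function name is hypothetical.

```python
def blend_seam(row_a, row_b):
    """Feather two overlapping scanline segments of equal length: the
    blend weight ramps linearly from fully image A at one end of the
    overlap to fully image B at the other, hiding the stitching seam."""
    n = len(row_a)
    return [(1 - i / (n - 1)) * row_a[i] + (i / (n - 1)) * row_b[i]
            for i in range(n)]
```

For example, blending the segments `[10, 10, 10]` and `[20, 20, 20]` yields `[10.0, 15.0, 20.0]`, a smooth ramp across the overlap.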

In embodiments, the imaging apparatus can be configured to perform periodic, occasional, or user-initiated calibration (or re-calibration) so as to improve the results of creating complex (panoramic or stereoscopic) images. In some embodiments, a camera apparatus can perform calibration based on a condition being fulfilled or not fulfilled. For example, a calibration can be triggered when a camera apparatus reaches any kind of milestone, such as, for example: a given number of hours of operation, a given number of times that camera platforms are manipulated between side-by-side and back-to-back configurations, a given number of still and/or video images acquired, or a given length of time since the last calibration. Any of the milestones and other conditions can be pre-programmed in the camera apparatus. In other examples, a camera apparatus can be prevented from performing a calibration until a preventing condition ceases to exist, such as, for example: a hand or other object covering one or more calibration targets, the camera apparatus not being in a desired configuration or orientation for calibration, inadequate storage for calibration-target images, or insufficient battery power. A camera apparatus can perform a first kind of calibration when the cameras are in the back-to-back configuration and a second kind of calibration when the cameras are in the side-by-side configuration, and can maintain different go or no-go conditions (including milestone counters) for the two kinds of calibrations and can store separate calibration data including intrinsic and extrinsic parameters for each of the two kinds of calibration.
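By way of non-limiting illustration, the go/no-go decision described above might be sketched as follows, with counters maintained separately per configuration; all field names and numeric thresholds here are hypothetical examples, not prescribed values.

```python
def calibration_due(state, config):
    """Decide whether a (re-)calibration should run now.

    `state` maps a configuration name ('back_to_back' or 'side_by_side')
    to a dict of counters and status flags kept for that configuration.
    Blocking conditions veto the calibration; otherwise any milestone
    triggers it. All thresholds are illustrative only.
    """
    s = state[config]
    # Blocking conditions: any one of these prevents calibration.
    if s.get("target_obscured") or s.get("battery_pct", 100) < 15:
        return False
    if s.get("storage_free_mb", 0) < 50:
        return False
    # Milestones: any one of these triggers calibration.
    return (s.get("hours_since_cal", 0) >= 100
            or s.get("config_changes", 0) >= 500
            or s.get("images_since_cal", 0) >= 1000)
```

Keeping a separate `state` entry per configuration mirrors the separate milestone counters and calibration data described above.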

According to embodiments, an imaging apparatus can perform a calibration in the back-to-back configuration as part of a stitching operation carried out by an onboard stitching module for creating panoramic images from two acquired images, and can perform a calibration in the side-by-side configuration as part of a stereo-image synthesizing operation carried out by an onboard stereo image synthesis module. A dewarping module can be used for dewarping images acquired with ultra-wide-angle cameras such as fisheye cameras, or cameras with fisheye lenses. Algorithms and procedures for dewarping fisheye images are known in the art. A stitching module and a stereo image synthesis module can include any combination of hardware (e.g., one or more processors, non-transitory computer-readable storage) and software (including program instructions for performing calibrations) necessary to carry out the respective functions of stitching pairs of images to create panoramic images and combining pairs of images to synthesize stereoscopic images. The stitching module and stereo image synthesis module can share hardware and/or software as necessary.

Any combination of a dewarping module, a stitching module and a stereo image synthesis module can share hardware, firmware and/or software as necessary.

Calibration can reduce stitching errors and/or improve image alignment in the process of stitching images to create panoramic images, for example 360° images.

Calibration can increase stereo overlap and/or reduce parallax error when synthesizing images to generate 3D stereoscopic images. A calibration target, which can include one or more operative target portions for calibration, can be included onboard the imaging apparatus. For example, in a side-by-side (stereo) configuration, a single calibration target can be placed on an elongated handle of the imaging apparatus, for example near the end of the handle far from the cameras, where it can be imaged simultaneously by both cameras. In another example, in a back-to-back (panorama) configuration, one or more different operative portions of a calibration target can be viewable by each of the two cameras. A calibration target can be provided on the ‘far’ end of an elongated handle also for use in a back-to-back configuration (where each camera sees a different operative portion or a different calibration target). Additionally or alternatively, a calibration target can be relatively close to a camera, for example on a sloping section or shelf section of the top of the handle that presents the calibration target to a camera in a manner that takes up a larger proportion of the camera’s angle-of-view than would a target at the ‘far’ end of the handle, not just because it is closer but because the slope or shelf causes the target to be presented to the camera at a less ‘shallow’ angle.
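By way of non-limiting illustration, the benefit of a close, less obliquely presented target can be quantified with a small model of the angular size the target subtends at the camera, foreshortened by the cosine of the obliquity angle; the function and all parameter values below are hypothetical.

```python
import math

def apparent_angular_size_deg(target_len_mm, distance_mm, obliquity_deg):
    """Approximate angular size (in degrees) that a flat calibration
    target subtends at the camera. Foreshortening by cos(obliquity)
    models a target viewed at a 'shallow' angle; obliquity=0 means the
    target is viewed face-on."""
    projected = target_len_mm * math.cos(math.radians(obliquity_deg))
    return math.degrees(2.0 * math.atan(projected / (2.0 * distance_mm)))
```

Under this model, a hypothetical 20 mm target at the end of a 100 mm handle viewed at 80° obliquity subtends roughly 2°, while the same target on a 30 mm-distant shelf at 30° obliquity subtends roughly 32°, consistent with the advantage of the sloped or shelved presentation described above.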

In order to be able to use the calibration targets provided onboard the imaging apparatus, it can be beneficial to provide ultra-wide-angle cameras with capabilities of imaging well ‘below the horizon,’ i.e., with an angle-of-view substantially greater than 180°, for example 205° or more.

Referring now to the figures, and in particular to Figs. 1 and 2 in combination, various projection views and perspective views of a dual-mode imaging apparatus 500 arranged in a ‘side-by-side’ configuration (the second platform configuration) are shown according to embodiments. By ‘side-by-side’ throughout this disclosure it is meant that first and second cameras 520L, 520R are arranged substantially side-by-side, i.e., as illustrated, for example, in the perspective view of Fig. 2-A. When side-by-side, respective optical axes 901L, 901R are parallel to each other and face in the same direction (albeit offset by the distance therebetween).

Note: Throughout this disclosure, subscripted reference numbers (e.g., 10i) or letter-modified reference numbers (e.g., 100a) may be used to designate multiple separate appearances of elements in a single drawing, e.g., 10i is a single appearance (out of a plurality of appearances) of element 10, and likewise 100a is a single appearance (out of a plurality of appearances) of element 100. Similarly, 200L and 200R would designate, respectively, left-side and right-side appearances of an element 200.

The left-right convention used in the figures and in the discussion of the figures is as follows: When an imaging apparatus in a side-by-side configuration, as disclosed in the embodiments, is held by a user so as to point the two cameras away from him, as in the orientation illustrated in the example of Fig. 1-D, the left side of the imaging apparatus is to the user’s left, and the right side of the imaging apparatus is to the user’s right. In this way, the respective orientations of left-side camera 520L and right-side camera 520R correspond to the orientation of the user’s left and right eyes. This convention is maintained for the various elements so referenced regardless of their orientation in other examples and figures.

As shown in Figs. 1-C and 1-D, in respective front and rear views, the two camera platforms 540 when in the second platform configuration are arranged transversely to the handle 510. This gives the imaging apparatus 500 a general T-shape (or, alternatively, a cross shape) when viewed from the front or rear. When viewed from the side, the crossbar of the T or cross is offset from the staff.

In preferred embodiments, in the second platform configuration, first and second cameras 520L, 520R are arranged so as to be co-planar, and vertically aligned with each other. This alignment is designed so as to give optimal results when acquiring pairs of images and subsequently synthesizing stereo images from them. Using the axes of Fig. 2-A, it can be seen that the two cameras 520L, 520R are substantially co-planar in the y-z plane and at the same height in the y-dimension; in other words, the two cameras are lined up so as to have substantially the same x- and y-coordinates, and only have different z-coordinates. By “substantially” it is meant within a tolerance of plus or minus 5% of a diameter 908 of a lens 530, or plus or minus 10% of the diameter 908, or plus or minus 20% or 25% or 50% of the diameter 908.

This arrangement can be beneficial in producing true stereo images from images acquired in the second platform (side-by-side) configuration. In some embodiments, the distance between the two cameras 520L, 520R (center-to-center) is selected so as to represent a typical distance between human eyes, known as ‘pupillary distance’, and thereby emulate human 3D vision when synthesizing stereo images. For example, the range of pupillary distance in adults is known to be generally in the range of 54-74 mm. A value for the distance DCL shown in Fig. 1-C between the respective centerlines 911L and 911R (also called ‘optical axes’) of the two cameras 520L, 520R (which are the same distance apart from each other as are the respective optical axes 901L, 901R shown in Figs. 1-A and 1-B) may be selected to be in this range, or near the middle of the range.
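By way of non-limiting illustration, the pupillary-distance and alignment constraints discussed above can be expressed as a simple layout check; the function name, default tolerance fraction, and millimeter values are illustrative only.

```python
def stereo_baseline_ok(separation_mm, lens_diameter_mm,
                       dx_mm=0.0, dy_mm=0.0, tol_frac=0.05):
    """Check a candidate camera layout against the constraints above:
    the center-to-center separation should fall in the typical adult
    pupillary-distance range (54-74 mm), and the two cameras should be
    co-planar and vertically aligned to within a fraction of the lens
    diameter (5% in the strictest variant described above).

    dx_mm and dy_mm are the residual x- and y-offsets between the two
    cameras, which would be zero for perfect alignment."""
    in_ipd_range = 54.0 <= separation_mm <= 74.0
    tol = tol_frac * lens_diameter_mm
    aligned = abs(dx_mm) <= tol and abs(dy_mm) <= tol
    return in_ipd_range and aligned
```

For example, a 64 mm separation (near the middle of the adult range) with a 25 mm lens and sub-millimeter alignment residuals would pass this check.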

Moreover, it can be beneficial to have respective cameras 520L, 520R on opposite sides of the handle 510, and opposite sides of frame 570, for better ergonomics and ‘extension’ of the left-right visual experience into the images acquired by the imaging apparatus 500.

As illustrated in the example of Fig. 1-D, the imaging apparatus 500 comprises a pivot assembly 501 and an elongated handle portion 510. The handle portion 510 can be used to grasp the imaging apparatus in a hand, for which purpose the handle portion 510 is preferably elongated enough to be grasped comfortably and securely. For example, the handle portion 510 can be longer in its longest dimension than the pivot assembly 501 in the same dimension. As another example, the length of elongated handle portion 510 can be selected so as to be at least half, or at least two-thirds, or at least 90%, or at least 100% of the width of a typical human adult hand, which on average is known to be about 8 cm. As illustrated in Fig. 1-C, the longest dimension of a handle portion 510 can define a longitudinal central axis 900, as shown, for example, in Figs. 1-C and 2-B. The central axis 900 is parallel to the longest dimension of the handle, and is the longitudinal centerline (i.e., the centerline in this longest dimension) in the two dimensions that are not the longest dimension, as evidenced by Fig. 1-C showing the central axis 900 in a front view and Fig. 2-B showing it in a left-side view.

The handle portion 510 can be used to stand up the imaging apparatus with substantial security on a flat surface so as to rigidly support the pivot assembly 501 above the handle portion 510, for which purpose the bottom portion 511 of the handle portion 510 is preferably flat. As is common with camera equipment, the bottom portion 511 can have an attachment arrangement 508, e.g., a threaded hole, for attachment of the camera apparatus 500 to an external support stand such as a tripod. The handle portion can include a number of user-oriented features such as user controls and a port cavity. In the examples illustrated in Figs. 1 and 2, user controls 506, lens release button 507, shutter button 509, and port cavity 505 are shown. User controls 506 can include, for example, mode selection (e.g., still/video), wi-fi activation, initiation of a manual camera calibration (with or without overriding the presence or the absence of a condition blocking a non-manually-initiated calibration), and/or on-off control of the imaging apparatus. User controls 506 can also include status indicators using, for example, LEDs. Port cavity 505, preferably covered by a removable or pivotable cover, can include, for example, a charging port, a data transfer port (e.g., USB), and/or a slot for a data storage card.

Pivot assembly 501 includes two camera-loaded camera platforms 588L, 588R. Each camera-loaded camera platform 588 includes a camera platform 540 and a camera 520 installed therein or thereupon. (Note: In many places in this disclosure where component structure and/or orientation is discussed and the image-acquisition function of the camera is not relevant, and in particular in the following discussion of the pivot arrangements, the camera-loaded camera platform 588 and the camera platform 540 are referred to interchangeably.)

Camera-loaded camera platforms 588L and 588R are pivotably attached to each other. The two camera-loaded camera platforms 588L, 588R are adapted to pivot open or closed so as to alternately place the imaging apparatus 500 respectively in a side-by-side configuration (by pivoting open in the direction of the arrow on arc 905R indicated in Fig. 1-A) or in a back-to-back configuration (by pivoting closed in the direction of the arrow on arc 905L in Fig. 1-A). The pivoting of the camera-loaded camera platforms 588L, 588R can define a plane that is orthogonal to the longitudinal central axis of the handle 510. The extent of either arc 905L or 905R is preferably a quarter circle, i.e., 90°, or within the range 88° to 92°, or within the range 85° to 95°, or within the range 80° to 100°.

In embodiments, the two camera platforms 540L, 540R are directly attached to each other. In other embodiments, as in the illustrative, non-limiting example of Fig. 1-C (and Fig. 1-D, which shows a detail of Fig. 1-C), the two camera platforms 540L, 540R are each pivotably attached to pivot assembly frame 570 and thus indirectly pivotably attached to each other. Each camera platform 540 can include a pivot member 541 adapted so as to complete a pivot arrangement together with the frame 570. A pivot member 541 can be in the form of a tab as in the aforementioned figures, or can be, for example, more flushly integrated with the respective parent camera platform 540. The size and strength of pivot member 541 and its manner of attachment should be sufficient so as to maintain the alignment of the camera platforms 540. The actual structure of the pivot mechanism employed is not relevant to the invention and for that reason is not illustrated. Examples include, without limitation: a pivot mechanism that includes two pin extensions on each pivot member 541 and corresponding grooves in the frame 570; a pivot mechanism that includes two grooves on each pivot member 541 and corresponding pin extensions on the frame 570; a pivot mechanism that includes a separate pin member connecting each pivot member to the frame; or a pivot mechanism that includes a hinge member connecting each pivot member to the frame. Individually or collectively, any of the aforementioned pivot mechanisms that are not part of a camera platform 540 is referred to herein as a pivot element 542. A pivot element 542 can be part of a frame 570 or can be a separate component. As will be discussed later, it can be desirable to equip the pivot mechanism with a spring arrangement.

A pivot element 542, which is generally a vertical element in the y-direction, can define a pivot axis 910, as shown, for example, in Figs. 2-A and 2-D. Camera platforms 540 pivot about the pivot axis 910. Pivot axis 910 is preferably parallel to the central axis 900 of the elongated handle portion 510. As shown in Fig. 2-A, a central-axis-pivot-axis (CA-PA) plane 920 that contains the two axes 900, 910 bisects the imaging apparatus 500 along the x-direction. Thus, in the second platform configuration (side-by-side) the two respective camera platforms 540L, 540R are disposed on opposite sides of the CA-PA plane 920.

In addition, it can be desirable to equip the pivot mechanism with a biasing element 543 (shown in Fig. 2-D) so as to “bias” the position of the camera platforms. By “bias” it is meant that a force or other design aspect makes the camera platforms move toward a specific platform configuration. For example, a biasing element 543 can comprise a torsion spring which can be used to transfer a persistent force to the camera platforms 540 so as to cause the camera platforms 540 to move from a first platform configuration (back-to-back) to a second platform configuration (side-by-side).

A pivot assembly frame 570 can serve as a docking frame for when the two camera platforms 540L, 540R are pivoted closed (in the direction indicated in Fig. 1-A) so as to place the imaging apparatus 500 in a back-to-back configuration. In some embodiments (not shown), there can be two parallel docking frames 570L, 570R, each provided for docking a single respective one of the camera platforms 540L, 540R in the first platform configuration. The following discussion refers to a single docking frame 570 but can also be applied to the aforementioned embodiments employing two parallel docking frames 570L, 570R.

Referring specifically to Fig. 2-B, a docking frame 570 is arranged so that in the first platform configuration, each of the camera platforms 540L, 540R is disposed within an interior volume 571. In such a configuration each of the camera platforms 540L, 540R is enclosed by a docking frame 570 on at least three sides (as illustrated in Fig. 3).

It can be useful to equip the docking frame 570 with an internal flange 574 facing inwards towards interior volume 571 which serves as a ‘stopper’ mechanism for the respective camera platforms 540L, 540R so as to ensure their respective positions in the back-to-back mode, i.e., the internal flange 574 is an internal member, attached to interior-facing surface 569 of the frame 570. The internal member 574 blocks at least one of the first and second camera platforms 540 from pivoting through an arc 905 of more than a quarter-circle. It can be useful to equip the docking frame 570 with at least one docking element 573 that can act to maintain the respective closed positions of camera platforms 540 in the back-to-back configuration. In some embodiments, as shown in Fig. 2-D, a docking element 573 is a tab on interior-facing surface 569 that can slide into a groove 544 (shown in Fig. 1-B) on the bottom edge of each camera platform 540. In other embodiments (not illustrated), there can be a docking tab on the bottom edge of the camera platform and a groove on the frame. In some embodiments, the docking elements 573 can be withdrawn at least partially into the frame 570 so as to facilitate the quick release of camera platforms 540 from the back-to-back configuration, for example when lens release button 507 is depressed by a user. There can be two docking elements 573L, 573R. In alternative embodiments (not shown) the docking elements 573 can comprise arms or levers, attached either to a docking frame 570 or to an elongated handle portion 510, that when rotated can hold respective camera platforms 540L, 540R in place.

As further shown in Fig. 1, each of the two camera platforms 540L, 540R can include respective microphones 521L, 521R for acquiring audio content along with, for example, video images. The two microphones 521L, 521R are preferably located as far away from each other as possible so as to enhance the apparent spatial separation of audio playback in at least two channels.

Fig. 3 includes side, rear and bottom projections, and a perspective view, of an imaging apparatus 500 according to embodiments, where the imaging apparatus 500 shown in the side-by-side configuration in Figs. 1 and 2 is shown here in the back-to-back configuration (first platform configuration). As shown, a pivot assembly 501 can be designed so that the camera-loaded camera platforms 588L, 588R can be made to dock compactly within the inner volume 571 (shown in Fig. 2-B) of frame 570 such that only a portion of each camera 520 and microphone 521 ‘sticks out’ from the frame 570.

Referring to Fig. 3-A, pivot axis 910 is offset from central axis 900 of the handle 510, in the x-direction towards a side of the frame 570. Thus, when the camera platforms are in the second platform configuration, they, too, are offset from the central axis 900, in the direction in which the cameras are facing. One of the benefits of this arrangement is that less of the handle 510 is captured by the two cameras 520 than would be captured without the forward offset. Because of this offset, it can be seen in Fig. 3-A that the centerline 911L of camera 520L is offset from the central axis 900. In some embodiments, the offset 912 between centerline 911L of camera 520L and the central axis 900, illustrated in the example of Fig. 3-A1, is approximately equal to the thickness of the frame 570 on the side where the pivot element 542 and pivot axis 910 are. In some embodiments, the offset 912 is at least one millimeter, or at least two millimeters, or at least three millimeters. In some embodiments, the offset 912 is equal to a percentage of an internal width 913 of frame 570. For example, the offset 912 can be at least 5% and at most 10% of the internal width 913. For another example, the offset 912 can be at least 6% and at most 9% of the internal width 913. For yet another example, the offset 912 can be at least 7% and at most 8% of the internal width 913.

Fig. 3-B shows that in the first platform configuration, the first and second ultra-wide-angle cameras 520 can be aligned with each other in a horizontal direction that is orthogonal to the vertical y-direction. The horizontal direction is parallel to the CA-PA plane 920 (shown “edge-on” in Fig. 3-B).

As can be seen in Fig. 3-D, a frame 570 can include electronically activated mode and status indicators 578. For example, mode and status indicators 578 can display whether the imaging apparatus 500 is currently set to acquire still images or video images, or has a time-lapse setting. In other examples, mode and status indicators 578 can display other information, such as whether the imaging apparatus is powered up, whether recording is currently underway, whether calibration of the camera is needed or recommended, whether processing of an image or a pair of images is underway, or whether communications are underway; alternatively, they can display whether calibration of the camera is being prevented by a condition being met or not being met, as will be discussed later in this disclosure. In another example, mode indicators 578 can display information related to a manually-initiated calibration process.

Referring back to Figs. 1-B and 2-C, it can be seen that an upper portion of the elongated handle 510, on each of two opposing faces of the elongated handle portion 510, is adapted to form sloping sections 580L, 580R. The sloping sections 580L, 580R are near the end of the elongated handle portion 510 that is closer to the pivot assembly 501. One non-limiting reason for providing the sloping sections 580 is to reduce the extent by which the elongated handle portion occludes part of a scene being imaged by first and second ultra-wide-angle cameras 520L, 520R.

A number of views of imaging apparatus 500 are shown in Fig. 4 in order to further highlight certain aspects of its geometry, especially in the second platform configuration. Figs. 4-A and 4-B show frame plane 903 “edge-on” as a straight line from two different orthogonal angles, thus establishing the two dimensions of the plane.

Similarly, Figs. 4-A and 4-C show camera platform plane 904 “edge-on” as a straight line from two different orthogonal angles so as to establish the two dimensions of the plane. Frame plane 903 is co-planar with central axis 900 (not shown in Fig. 4) and bisects the frame 570 in the direction shown. Camera platform plane 904, which is intersected by two parallel cross-sections of the camera platforms 540L, 540R, is preferably orthogonal to frame plane 903.

We refer now to Figs. 5, 6, 7 and 8, where an alternative embodiment of an imaging apparatus 500 is illustrated, in which the elongated handle portion 510 of previously described embodiments is not provided. The imaging apparatus 500 is shown in the second platform configuration (camera platforms 540L, 540R side-by-side) in Fig. 5 and in the first platform configuration (camera platforms 540L, 540R back-to-back) in Figs. 6, 7 and 8. In other respects, the imaging apparatus 500 of Figs. 5 and 6 can be identical to the imaging apparatus 500 shown in Figs. 1-3, apart from the handle 510 and the various features related to the handle 510.

As illustrated in Figs. 7 and 8, an imaging apparatus 500 without an elongated handle 510 (e.g., the one illustrated in Figs. 5 and 6) can be connected electronically to an external device 700, e.g., a smartphone, such that a smartphone 700, which comprises a touchscreen 701, can replace the mode and status indicators 578, user controls 506, lens release button 507 and shutter button 509 by showing mode and status indicators and by providing user controls. The smartphone can also be used to receive user inputs and commands, and to transmit control instructions to the imaging apparatus 500. As illustrated in Fig. 7, the imaging apparatus 500 and smartphone 700 can be electronically connected by communications connection 476, which can be wired or wireless or a combination thereof.

As illustrated in Fig. 8, the imaging apparatus 500 and smartphone 700 can be both mechanically (rigidly) and electronically connected by an attachment arrangement 549 so that the smartphone also performs the mechanical function of rigidly supporting the imaging apparatus 500. In some embodiments in which an elongated handle portion 510 is provided, the elongated handle portion 510 can comprise a cellphone 700. Both communications connection 476 and attachment arrangement 549 can be selected so as to be compatible with a power and communications attachment feature of the smartphone 700, such as, e.g., USB (universal serial bus) or other such attachment feature as may be used in the industry. It should be noted that the orientation of the imaging apparatus 500 with respect to the cellphone 700 in Fig. 8 is a non-limiting example for purposes of illustration, and in other examples of the invention the imaging apparatus 500 can be rotated 90° clockwise or counter-clockwise relative to the illustrated example.

Fig. 9 illustrates schematically a variety of angles-of-view of ultra-wide-angle cameras 520. An angle-of-view larger than 200° can be advantageous, for example, for stitching together images for a panorama with the design as illustrated, especially where relatively thick cameras are disposed back-to-back in the first platform configuration.

Fig. 10A shows an exemplary embodiment wherein an external display screen 480 is provided for viewing still or video images ‘live’ and/or for reviewing still or video images already captured and, for example, stored in external storage medium 470. The external display screen 480 and external storage medium 470 can be coupled to the imaging apparatus 500, for example to ports in the port cavity 505 or in the bottom 511 of the handle, using respective high-speed transmission arrangements 485, 475 which can be wired or wireless, depending on the specific design.

Fig. 10B shows another exemplary embodiment in which 3D, or stereoscopic, glasses are provided for use with the imaging apparatus 500. Since external devices such as display screens 480 are commonly available in homes and offices, it can be advantageous to combine the imaging apparatus 500 with 3D glasses in order to view images acquired by the imaging apparatus without the need for accessing additional equipment beyond what is normally found in homes and offices.

Discussion of Calibration targets and related methods

As discussed earlier in this disclosure, it can be desirable to calibrate (or re-calibrate) the cameras 520 of a camera apparatus 500 in order to improve the complex images created by combining pairs of acquired images. As is known in the art, it is common practice to use a calibration target with clearly identifiable geometric features, and/or with highly contrasting adjacent areas. A calibration-target image acquired by a camera can be used to calculate or update calibration data, including intrinsic and extrinsic parameters.
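
The disclosure does not prescribe particular calibration mathematics. As a hedged illustration of why a calibration-target image carries extrinsic information, the following pure-Python sketch (pinhole model; all names and numeric values are illustrative assumptions, not taken from the specification) shows how a small uncalibrated pitch rotation shifts the projected position of a known target point. This is the relationship a calibration solver inverts to recover rotation and translation deviations:

```python
import math

def rot_x(pitch_rad):
    """Rotation matrix about the x-axis (a 'pitch' deviation)."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def project(point_w, rotation, translation, focal_px):
    """Pinhole projection: pixel = f * (R @ Xw + t) / z."""
    xc = [sum(rotation[i][j] * point_w[j] for j in range(3)) + translation[i]
          for i in range(3)]
    return (focal_px * xc[0] / xc[2], focal_px * xc[1] / xc[2])

# An illustrative calibration-target corner 0.2 m below and 0.1 m in
# front of the camera (coordinates are assumptions for demonstration).
corner = (0.0, -0.2, 0.1)
nominal = project(corner, rot_x(0.0), [0.0, 0.0, 0.0], 1000.0)
tilted = project(corner, rot_x(math.radians(2.0)), [0.0, 0.0, 0.0], 1000.0)
# The 2-degree pitch error moves the imaged corner vertically; a
# calibration procedure inverts this relationship from many such points.
```

In practice, many such point correspondences across the target are solved simultaneously, and libraries commonly used for this purpose (e.g., OpenCV's calibration module) return the rotation and translation as extrinsic parameters.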

Fig. 11 shows four different prior art calibration target pattern examples. This is not an exhaustive representation of calibration target patterns but is reproduced to show the types of features that can be included in calibration targets.

We now refer to Fig. 12, which shows a left-rear perspective view and a front projection view of a camera apparatus 500 in a back-to-back configuration. An upper portion of the elongated handle 510, on each of two opposing faces of the handle 510, is adapted to form sloping sections 580L, 580R. The sloping sections 580L, 580R are near the end of the handle 510 that is closer to the pivot assembly 501. Calibration targets 550L, 550R are provided on the sloping sections 580. One non-limiting reason for placing the calibration targets 550 on the sloping sections 580 is to improve the viewability of the calibration targets 550L, 550R by respective cameras 520L, 520R. The improvement in viewability is accomplished by (a) presenting the targets 550 to the cameras 520 at a less oblique angle than if the targets 550 were anywhere else on the handle 510, especially if placed farther away on the long, flat part of the handle 510, and (b) having the targets 550 take up a larger proportion of the angle of view of each of the cameras 520 than if placed elsewhere, lower down on the handle 510. In other embodiments (not shown) the sloping section 580 can be more horizontal and in fact can be a shelf that is perpendicular to the handle 510. The placement of ‘close’ calibration targets 550, i.e., targets close to the pivot assembly, is particularly advantageous for use when the camera apparatus 500 is in the back-to-back configuration. However, such targets would not be usable by a camera apparatus 500 in the side-by-side configuration, as can be seen, for example, in Fig. 7-B, where the calibration target 550 is not viewable by cameras 520 in the side-by-side configuration. Even if the angle of coverage of the ultra-wide-angle cameras 520 were wide enough, the camera platforms 540 according to the illustrated embodiments would block the close calibration targets 550 on the sloping portions 580 from the perspective of the camera 520.

It should be noted that the specific method of providing calibration targets on a handle 510 of the camera apparatus is not important and any method may be used. For example, a calibration target can be printed directly on the handle or attached as a sticker, or can be molded or etched into the material of the handle, or can be glued on, heat-welded, or fastened or affixed in any way that makes the calibration target either a fixed part of the handle or a replaceable part of the handle, whether that makes the target an integral portion of the handle or a temporary add-on.

Referring now to Fig. 13, a ‘far’ calibration target 551 is provided on the distal end of the elongated handle 510 for use by the cameras 520 when in the side-by-side configuration. As shown in Fig. 14, it is possible, and in some embodiments desirable, to provide both close calibration targets 550 and a far calibration target 551 on a single camera apparatus 500 so that at least one of the provided calibration targets is always viewable by each camera 520 in either of the back-to-back or side-by-side configurations.

Fig. 15 illustrates the use of a far calibration target 552 in a back-to-back configuration. In the illustrated example, far calibration target 552 is a ‘wraparound’ target visible on all sides and thus viewable by the cameras 520 in both the back-to-back and side-by-side configurations. For example, for manufacturing reasons or aesthetic reasons it may be desirable to have a wraparound target. Different portions of such a wraparound target would be operative to function as a calibration target, depending on the configuration. In another example, far calibration target 552 is not completely wraparound, and is provided in discrete non-contiguous sections.

Fig. 16 illustrates an alternative embodiment in which far calibration targets 553L, 553R are provided on respective ‘drop-down’ portions 513L, 513R of the handle 510 so as to present each calibration target 553 at a less oblique angle to each respective camera 520 than, for example, the target 552 of Fig. 15. Fig. 17 illustrates yet another alternative embodiment in which far calibration targets 554 are provided on ‘pull-out’ portions 514 of the handle 510, also so as to present the calibration targets 554 at a less oblique angle to the respective cameras 520. Thus, in some embodiments, a calibration target can be extended or extensible from the handle itself.

In some embodiments, a calibration target may be provided on a support stand or support accessory of a camera apparatus. For example, a tripod is a three-legged support stand often used to support cameras. Referring now to Fig. 18, a camera apparatus 500 is illustrated as being supported by a support stand 490, which in this non-limiting example is a tripod, but can be any type of support accessory for stabilizing and supporting the camera apparatus 500. Each of the tripod legs 491 is provided with a calibration target portion 555. At least one of the calibration targets 555 is viewable by each of the two ultra-wide-angle cameras 520 in both back-to-back and side-by-side configurations. Fig. 19 shows an example of another support stand 490, this one with highly contrasting portions in each of the legs 491, making the addition of calibration targets unnecessary: the highly contrasting legs 491 can serve as calibration targets as well.

We refer back to Fig. 9, which shows (A) an oblique left-side view of a camera apparatus 500 in a side-by-side configuration. Angle-of-view lines have been added in order to emulate, in cross-section, the extent to which an ultra-wide-angle camera 520 can ‘see’, i.e., image, a calibration target on a handle. This issue is particularly relevant to the design illustrated wherein the two cameras 520, when in the side-by-side configuration, face ‘forward’ and away from the handle 510, and the location on the handle where a calibration target might be provided is ‘below the horizon’ for the cameras 520. For example, the first (left-most) angle-of-view line marked a=180° represents an angle-of-view for an ultra-wide-angle camera with an angle-of-coverage of 180°. It is likely that such a camera would be unable to acquire calibration-target images when in the side-by-side configuration if an onboard calibration target were to be located on the handle 510 of a camera apparatus 500 as illustrated, for example, in Fig. 13. The same can be seen to be true of the second angle-of-view line marked a=190°, although in this case if the handle were to be further elongated, it would eventually be possible to see a calibration target located on the handle with an angle-of-view (a) of 190°. Nonetheless, for a handle of a reasonable size, e.g., the size of a user’s hand, combined with a general design concept employed in the illustrated examples wherein the pivot axis of the cameras is displaced (in the direction of the scene to be imaged) from the central axis of the handle, an ultra-wide-angle camera with an angle-of-view of 190° could be largely ineffective in terms of acquiring calibration-target images from an onboard calibration target.
The angle-of-view lines corresponding to 200° or 210° show that cameras with an angle-of-view (a) greater than 200°, or greater than or equal to 210°, would be capable of viewing onboard calibration targets on the handle and acquiring calibration-target images thereof, and would therefore be suitable for use in a camera apparatus of the present disclosure.
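
As a rough sketch of the geometry underlying Fig. 9 (the coordinates below are illustrative assumptions, not the dimensions of any embodiment), the minimum angle-of-view a camera needs in order to image a target that sits below and slightly behind the lens plane can be computed from the off-axis angle to the target:

```python
import math

def min_angle_of_view_deg(camera_xy, axis_xy, target_xy):
    """Minimum full angle-of-view (degrees) that a camera at camera_xy,
    looking along the unit vector axis_xy, needs for target_xy to fall
    within its field of view (2-D cross-section, as in Fig. 9)."""
    dx = target_xy[0] - camera_xy[0]
    dy = target_xy[1] - camera_xy[1]
    norm = math.hypot(dx, dy)
    cos_off = (axis_xy[0] * dx + axis_xy[1] * dy) / norm
    off_axis = math.degrees(math.acos(max(-1.0, min(1.0, cos_off))))
    return 2.0 * off_axis

# Camera faces 'forward' (+x); an illustrative target on the handle sits
# below the camera (-y) and slightly behind the lens plane (-x), i.e.
# 'below the horizon' for the camera.
needed = min_angle_of_view_deg((0.0, 0.0), (1.0, 0.0), (-0.018, -0.10))
# 'needed' exceeds 180 degrees, so a 180-degree camera cannot image such
# a target, consistent with the a=180 degree line in Fig. 9.
```

With the illustrative coordinates shown, the target requires an angle-of-view of roughly 200°, which matches the disclosure's conclusion that cameras of 200° or more are suitable for imaging onboard targets.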

Fig. 9 also shows (B) an oblique front view of the camera apparatus 500 in a back-to-back configuration, with angle-of-view lines similar to those in part A of Fig. 9. As can be seen in the drawing, sloping portion 580 intercepts a larger range of angle-of-view lines (i.e., most of the range of a=180° to a=210°) than does any part of the surface of the elongated portion of the handle 510, and therefore can be more suitable for ‘presenting’ a calibration target 550 (shown in Figs. 5 and 7) to the cameras 520.

Fig. 20 shows a block diagram of a camera apparatus 500 and related components. The mechanical components of the pivot assembly 501, handle 510 and docking frame 570 have been discussed with reference to the preceding figures, as has external camera support 490. The functions of stitching module 600, stereo image synthesis module 610, video-display-controller 620 and configuration sensor 625, as well as display device 480 and storage medium 470, are discussed elsewhere in this specification in connection with embodiments relating to their deployment and/or use.

Referring now to Figs. 21 and 22, two different embodiments for displaying acquired images are shown, i.e., images acquired by cameras 520. Fig. 21 shows an embodiment wherein the rear of each camera platform 540 is provided with a display screen 545, which for example can be a liquid crystal display (LCD) screen. Screens 545L, 545R can be used to watch still or video images as they are acquired by the camera apparatus 500 in side-by-side mode (i.e., for stereoscopic images), or to review still and/or video images acquired in either configuration. A video-display-controller 620 (not shown in Fig. 21) can be provided for controlling the display of images on the onboard screens 545. Fig. 22 shows an embodiment wherein an external display screen 480 is provided for viewing still or video images ‘live’ and/or for reviewing still or video images already captured and, for example, stored in external storage medium 470. The external display screen 480 and external storage medium 470 can be coupled to the imaging apparatus 500, for example to ports in the port cavity 505 or in the bottom 511 of the handle, using respective high-speed transmission arrangements 485, 475 which can be wired or wireless, depending on the specific design.

In embodiments, camera calibration, and more specifically re-calibration, is often necessary in order to correct, by software means, the effects of a mechanical misalignment of one or both of the two cameras 520 of a dual-mode camera apparatus 500. As described earlier, the two cameras 520 can be manipulated back-and-forth so as to switch between a 360° panoramic imaging mode in the back-to-back configuration and a 3D stereoscopic imaging mode in the side-by-side configuration.

The internal workings of a digital camera can include a planar array of photodetectors. Such a planar array can define a respective photodetector plane for each camera. This is illustrated in Fig. 23. A camera apparatus 500 is shown in (A) left-rear perspective view and (B) oblique front projection view. Together with each view, a representation of dimensional axes (x-, y- and z-axes) is shown for reference. The photodetector plane defined by the planar array of photodetectors of each one of the cameras 520R, 520L is represented schematically by planes PR and PL, respectively. As can be seen more clearly in the front view, planes PR and PL are optimally parallel to each other in the back-to-back configuration. The design of the docking frame 570 and its components can be selected so as to contribute to maintaining the parallel disposition of the two photodetector planes PR and PL. For example, the vertical movement of the camera platforms 540L, 540R is preferably restricted, by virtue of their being disposed inside the docking frame 570, to no more than one or two millimeters.

In the side-by-side configuration, illustrated in Fig. 24, the two photodetector planes PR and PL are optimally not only parallel but also coplanar. Fig. 24 also indicates and illustrates the three Euler angles through which either or both of the photodetector planes PR and PL can rotate and thereby cause cameras 520R and/or 520L to become misaligned in terms of pitch, roll and yaw. While pitch, roll and yaw are illustrated with an example of a camera apparatus 500 in the side-by-side configuration, it should be evident that rotation through the same Euler angles is equally applicable in the back-to-back configuration, even in spite of the presence of the docking frame 570.
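
The Euler-angle rotations mentioned above can be made concrete with a short sketch. The axis assignments and the z-y-x composition order below are assumptions for illustration, since the disclosure does not fix a convention:

```python
import math

def rx(a):  # pitch: rotation about the horizontal x-axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):  # yaw: rotation about the vertical y-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):  # roll: rotation about the optical z-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def misalignment(pitch, yaw, roll):
    """Composite rotation of a photodetector plane (z-y-x order assumed)."""
    return matmul(rz(roll), matmul(ry(yaw), rx(pitch)))
```

With zero pitch, roll and yaw the composite is the identity, i.e., a perfectly aligned photodetector plane; any nonzero angle rotates the plane away from its factory-specified attitude.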

In addition to rotation about one or more of the Euler angles, a camera can also become laterally displaced, or translated, from its factory-specified position. Either rotation or translation can occur for any one of a number of reasons, such as, for example, manufacturing error or simple wear of pivoting arrangements. When a calibration procedure is performed, deviations of rotation and translation are measured as extrinsic calibration parameters with respect to real-world coordinates, which in the instant disclosure are generally represented by a calibration target. Calibration data can be compared to previous values that can be stored, for example, in onboard non-transitory storage in a camera apparatus. The previous values can be factory-specified values, or can be the values measured or calculated during a previous calibration procedure.
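
As a hedged sketch of the comparison just described (the data layout, field names and tolerance values below are illustrative assumptions, not taken from the disclosure), newly measured extrinsic parameters can be compared against stored factory or previously calculated values to decide whether updated calibration data should be applied:

```python
def needs_recalibration(current, stored, rot_tol_deg=0.5, trans_tol_mm=1.0):
    """Compare newly measured extrinsics against stored values (e.g.
    factory-specified, or from a previous calibration) and flag when
    either deviation exceeds its tolerance.  Both arguments are dicts
    with 'rotation_deg' (pitch, roll, yaw) and 'translation_mm'
    (x, y, z) tuples; the tolerances are illustrative assumptions."""
    rot_dev = max(abs(c - s) for c, s in
                  zip(current["rotation_deg"], stored["rotation_deg"]))
    trans_dev = max(abs(c - s) for c, s in
                    zip(current["translation_mm"], stored["translation_mm"]))
    return rot_dev > rot_tol_deg or trans_dev > trans_tol_mm

stored = {"rotation_deg": (0.0, 0.0, 0.0), "translation_mm": (0.0, 0.0, 0.0)}
within_tol = {"rotation_deg": (0.2, 0.0, 0.0), "translation_mm": (0.5, 0.0, 0.0)}
out_of_tol = {"rotation_deg": (1.5, 0.0, 0.0), "translation_mm": (0.0, 0.0, 0.0)}
```

A real apparatus would persist the stored values in onboard non-transitory storage, as the passage above notes.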

General note: In the embodiments disclosed herein, any ultra-wide-angle camera 520 can capture images with an angle-of-view greater than 180°, or greater than 190°, or greater than 200°, or greater than 210°, and the stitching of images acquired in the back-to-back mode can generally be performed so as to create 360° panoramic images. Merely for the sake of convenience, and in order to facilitate the illustrating of the principles of the embodiments in the accompanying figures, the ‘acquired images’ shown in the figures, beginning with Fig. 25, are shown with substantially smaller angles-of-view than are possible with the disclosed camera apparatuses, and are uniformly represented as rectangular, although they could alternatively be hemispherical images in any of the disclosed embodiments. To be clear, the foregoing explanation applies to all of the ‘acquired images’ illustrated, regardless of the configuration, i.e., side-by-side or back-to-back, in which they are acquired.

Fig. 25 shows a prior art example of stitching together two images 15, 16 acquired by a camera apparatus 500 in the back-to-back configuration. If the images 15, 16 are acquired by a camera apparatus 500 that is perfectly calibrated, then following a stitching process carried out, for example, by stitching module 600, a stitched panoramic image 20 includes a stitching overlap 17 that exhibits no sign of misalignment.
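
Stitching itself is, as noted, well known in the art. Purely for illustration (this is not the actual algorithm of stitching module 600, and the tiny grayscale "images" are assumptions), a minimal linear cross-fade over the stitching overlap of two already-aligned images can be sketched as:

```python
def feather_stitch(left, right, overlap):
    """Stitch two equal-height grayscale images (lists of rows) whose
    last/first `overlap` columns image the same part of the scene,
    cross-fading linearly across the overlap region."""
    out = []
    for lrow, rrow in zip(left, right):
        blended = []
        for i in range(overlap):
            w = (i + 1) / (overlap + 1)  # 0 -> left dominates, 1 -> right
            blended.append((1 - w) * lrow[len(lrow) - overlap + i]
                           + w * rrow[i])
        # left-only part + blended overlap + right-only part
        out.append(lrow[:-overlap] + blended + rrow[overlap:])
    return out

# Two 1x4 strips sharing a 2-column overlap stitch into one 1x6 panorama.
pano = feather_stitch([[10, 10, 10, 10]], [[10, 10, 20, 20]], 2)
```

When the two inputs agree in the overlap, as in a perfectly calibrated apparatus, the blend is invisible, which is the "no sign of misalignment" case described for stitching overlap 17.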

Fig. 26 shows a prior art example of combining two images 25, 26 acquired by a camera apparatus 500 in the back-to-back configuration. If the images 25, 26 are acquired by a camera apparatus 500 that is perfectly calibrated, then following a process of synthesizing a stereo image carried out, for example, by stereo image synthesis module 610, a synthesized stereo image 30, when viewed, for example, through 3D glasses as perceived image 31, includes a stereo overlap 27 that exhibits no sign of misalignment.
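
Stereo synthesis is likewise well known. As one illustrative possibility only (the disclosure does not specify an anaglyph output; the pixel data below are assumptions), two aligned left/right images can be combined into a red-cyan anaglyph viewable through the kind of 3D glasses shown in Fig. 10B:

```python
def anaglyph(left_rgb, right_rgb):
    """Combine aligned left/right RGB images (nested lists of (r, g, b)
    tuples) into a red-cyan anaglyph: the red channel comes from the
    left eye's image, green and blue from the right eye's."""
    return [[(lpx[0], rpx[1], rpx[2])
             for lpx, rpx in zip(lrow, rrow)]
            for lrow, rrow in zip(left_rgb, right_rgb)]

# One-pixel example: red from the left image, green/blue from the right.
combined = anaglyph([[(255, 0, 0)]], [[(0, 128, 64)]])
```

Other synthesis formats (side-by-side frames, interleaved rows, autostereoscopic views) follow the same principle of combining the two acquired images per-pixel after alignment.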

As alluded to earlier, a camera apparatus 500 may not always be in perfect alignment, and one or both of the cameras 520 may not be in its factory-specified position and attitude. As a result, photodetector planes PR and PL of Figs. 23 and 24 may be rotated and/or translated from their pristine, parallel relative disposition.

Referring to Fig. 27, examples are shown of images acquired in the back-to-back configuration with a misaligned camera 520; specifically, in each of the examples of Fig. 27, the misaligned camera is right-side camera 520R. In Fig. 27-A, the result of ‘pitch’ rotation of camera 520R can clearly be seen in right image 16, which shows a lower portion of the scene than left-side image 15, acquired by left-side camera 520L, and also lower in comparison to the ‘correct’ version of right image 16 in Fig. 25. In Fig. 27-B, the result of ‘roll’ rotation of camera 520R can clearly be seen in right image 16, which shows an image seemingly rotated counterclockwise with respect to the scene appearing in left-side image 15, acquired by left-side camera 520L, as well as with respect to the ‘correct’ version of right image 16 in Fig. 25. In Fig. 27-C, the result of ‘yaw’ rotation of camera 520R can clearly be seen in right image 16, which shows a portion of the scene that is a bit too far to the left (the rightmost palm tree is cut off) in comparison to the ‘correct’ version of right image 16 shown in Fig. 25.

In Fig. 28, examples are shown of stereo images 30 synthesized from misaligned images 25, 26. For purposes of illustration only, all of the alignment issues in the examples shown in Fig. 28 stem from uncalibrated rotation error in the left camera 520L. In each of the examples, the misalignment can be easily detected by a user, based on different heights in the two original images (pitch angle error, Fig. 28-A), objects in the scene being presented at different angles (roll angle error, Fig. 28-B) and failure to achieve a desired degree of left-right separation (yaw angle error, Fig. 28-C) which can also be seen as parallax error and insufficient stereo overlap (or, in other cases when the yaw angle error is in the opposite direction, excessive stereo overlap).

A user viewing a panoramic image 20 with, for example, alignment-related stitching error or a stereo image 30 with, for example, parallax error, can be given an opportunity to manually initiate the performance of a calibration.

In a non-limiting example illustrated in Fig. 29, the stitching module 600 of the camera apparatus 500 can stitch two acquired images 15, 16 so as to create a panoramic image 20, and the video-display-controller 620 can cause the stitched panoramic image 20 to be displayed, to a user, on an onboard display screen 545 or external display device 480. Stitching two images to create a panorama is well known in the art and there is no reason to describe the specifics of the process in this disclosure. In the Fig. 29 example the user can view the stitched panoramic image 20 and determine, for example, that the visible artifacts of a stitching error, possibly from faulty alignment of the two onboard cameras 520, render the stitched image 20 unsatisfactory. Of course, there can be additional or alternative causes or visible artifacts of misalignment. In the example of Fig. 29-A, the misalignment has been caused by pitch angle error in the right-side camera 520R. The user, whether prompted or not, depending on the design of the camera apparatus 500, can manually initiate a calibration procedure by the stitching module 600, for example by providing a user input via a button or virtual screen button or prompt. The stitching module 600 can perform a re-calibration (in which the stitching module 600 acquires one or more calibration-target images and recalculates calibration data for one or both of the cameras 520) and the video-display-controller 620 can cause a re-stitched panoramic image 20, using the updated calibration parameters, to be displayed. The video-display-controller 620 can include a toggling display mode that permits the user to toggle back-and-forth between the two ‘versions’ of the stitched panoramic image 20 (e.g., ‘A’ and ‘B’ in Fig. 29) and select the preferred version. The post-calibration version of the stitched panoramic image 20 in Fig. 29-B shows reduced stitching error following the manually-initiated calibration, versus the pre-calibration version of Fig. 29-A, and the user may therefore choose to (a) retain the post-calibration version, and/or (b) instruct the camera apparatus 500 whether to retain the updated calibration parameters for subsequent image acquisition.

In a non-limiting example illustrated in Fig. 30, the stereo image synthesis module 610 of the camera apparatus 500 can combine two acquired images 25, 26 so as to synthesize a stereo image 30, and the video-display-controller 620 can cause the synthesized stereo image 30 to be displayed, to a user, on an onboard display screen 545 or external display device 480. The user may choose to use 3D glasses if the display is not autostereoscopic. Combining two images to create a 3D stereoscopic image is well known in the art and there is no reason to describe the specifics of the process in this disclosure. In the Fig. 30 example the user can view the synthesized stereo image 30 and determine that the visible artifacts of, for example, parallax error, possibly from faulty alignment of the two onboard cameras 520, render the stereo image 30 unsatisfactory. Of course, there can be additional or alternative causes or visible artifacts of misalignment.

In the example of Fig. 30-A, the misalignment has been caused by yaw angle error in the left-side camera 520L. The user, whether prompted or not, depending on the design of the camera apparatus 500, can manually initiate a calibration procedure by the stereo synthesis module 610, for example by providing a user input via a button or virtual screen button or prompt. The stereo synthesis module 610 can perform a re-calibration (in which the stereo image synthesis module 610 can acquire one or more calibration-target images and recalculate calibration data for one or both of the cameras 520) and the video-display-controller 620 can cause a re-synthesized stereo image 30, using the updated calibration parameters, to be displayed. The video-display-controller 620 can include a toggling display mode that permits the user to toggle back-and-forth between the two 'versions' of the synthesized stereo image 30 - e.g., 'A' and 'B' in Fig. 30 - and select the preferred version. Referring back to Fig. 30, the post-calibration version of the synthesized stereo image 30 in Fig. 30-B shows reduced parallax error - and increased stereo overlap - following the manually-initiated calibration, versus the pre-calibration version of Fig. 30-A, and the user may therefore choose to (a) retain the post-calibration version, and/or (b) instruct the camera apparatus 500 whether to retain the updated calibration parameters for subsequent image acquisition.

Referring now to Fig. 31, a top-level flowchart of a method is shown, for stitching images 15, 16 acquired by a camera apparatus 500 in the back-to-back configuration to create a panoramic image 20. The method comprises: Step S11, computing updated image-alignment calibration data for each of the two cameras 520. Step S11 can comprise two sub-steps:

Sub-step S11-1, acquiring calibration-target images of a calibration target by each of the two cameras 520L, 520R, and

Sub-step S11-2, calculating updated rotation and translation data for each respective photodetector plane P_L, P_R relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target.

Step S12, stitching the two images 15, 16 to form a 360° panoramic image 20, using the updated image-alignment calibration data.

A calibration target can be provided onboard the camera apparatus or on a support apparatus, as discussed in the various embodiments.

In some embodiments, Sub-step S11-1 includes acquiring calibration-target images of the calibration target while in the back-to-back configuration. In other embodiments, acquisition of calibration-target images in the back-to-back configuration is not possible, for example because an appropriate target is blocked by a user's hand or has fallen off the camera apparatus or for any other reason; in such a case the camera apparatus can perform Sub-step S11-2 by calculating rotation and translation data from previously stored data, or from calibration data acquired when the camera apparatus 500 was in the side-by-side configuration. In some cases, only one calibration-target image can be acquired, i.e., by one of the two onboard cameras, and then it may be possible to calculate calibration data (e.g., rotation and translation parameters) based on any combination of the single calibration-target image, previously stored data, and calibration data calculated from calibration-target images acquired when the camera apparatus was in the side-by-side configuration.

Referring now to Fig. 32, a top-level flowchart of a method is shown, for synthesizing a stereo image 30 from images 25, 26 acquired by a camera apparatus 500 in the side-by-side configuration. The method comprises:

Step S21, computing updated image-alignment calibration data for each of the two cameras 520. Step S21 can comprise two sub-steps:

Sub-step S21-1, acquiring calibration-target images of a calibration target by each of the two cameras 520L, 520R, and

Sub-step S21-2, calculating updated rotation and translation data for each respective photodetector plane P_L, P_R relative to the other, by analyzing the acquired calibration-target images of the onboard calibration target.

Step S22, synthesizing a stereo image 30 from the two images, using the updated image-alignment calibration data.

For this method, a calibration target can be provided onboard the camera apparatus or on a support apparatus, as discussed in the various embodiments. Sub-step S21-1 includes acquiring calibration-target images of the calibration target while in the side-by-side configuration.
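Sub-steps S11-2 and S21-2 above call for calculating updated rotation and translation data by analyzing the acquired calibration-target images. As a non-authoritative sketch of the underlying estimation, the following pure-Python function recovers a planar rotation angle and translation between two matched sets of calibration-target feature points (a 2-D least-squares, Kabsch-style fit). The function name and the reduction to two dimensions are illustrative assumptions, not the apparatus's actual algorithm.

```python
import math

def rigid_align_2d(ref_pts, obs_pts):
    """Estimate the rotation angle (radians) and translation (tx, ty) that
    best map ref_pts onto obs_pts in the least-squares sense (2-D Kabsch).
    ref_pts/obs_pts are equal-length lists of matched (x, y) features."""
    n = len(ref_pts)
    # Centroids of the two point sets
    cx_r = sum(p[0] for p in ref_pts) / n
    cy_r = sum(p[1] for p in ref_pts) / n
    cx_o = sum(p[0] for p in obs_pts) / n
    cy_o = sum(p[1] for p in obs_pts) / n
    # Cross-covariance terms of the centred point sets
    sxx = sxy = syx = syy = 0.0
    for (xr, yr), (xo, yo) in zip(ref_pts, obs_pts):
        xr, yr, xo, yo = xr - cx_r, yr - cy_r, xo - cx_o, yo - cy_o
        sxx += xr * xo; sxy += xr * yo
        syx += yr * xo; syy += yr * yo
    theta = math.atan2(sxy - syx, sxx + syy)
    # Translation that maps the rotated reference centroid onto the observed one
    tx = cx_o - (cx_r * math.cos(theta) - cy_r * math.sin(theta))
    ty = cy_o - (cx_r * math.sin(theta) + cy_r * math.cos(theta))
    return theta, tx, ty
```

In practice a full calibration would estimate a three-axis rotation and 3-D translation between the photodetector planes P_L, P_R; this sketch only illustrates the matched-feature fitting step in one plane.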

For either of the above methods, a calibration may be initiated automatically because a camera apparatus has reached a milestone, for example a counting milestone such as number of images acquired since the previous calibration, number of times the two camera platforms 540 have been manipulated between the two configurations, time since the previous calibration, etc. In some embodiments, an automatic calibration can be delayed if a condition is not met, for example that the calibration target not be blocked, or that there be sufficient battery power.

In some embodiments, a user may decide to manually initiate a calibration procedure.

Referring to Fig. 33, a top-level flowchart illustrates a manually-initiated calibration method for panoramic images, which can include the following steps:

Step S31 causing a first stitched panoramic image 20 to be displayed on an external or onboard display device.

Step S32 receiving user input manually initiating a computing of updated image-alignment calibration data for each of the two cameras 520.

Step S33 performing a user-initiated computing in response to the user input of Step S32, in accordance with calibration methods described herein.

Step S34 causing a second stitched panoramic image 20 to be displayed, using the updated image-alignment calibration data.

Step S35 providing a toggling display mode for toggling back-and-forth between the first and second panoramic images.

Not all of the steps are necessarily carried out in the performance of the method.
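The flow of Steps S31-S35 can be sketched as a simple control sequence. All four callables below are hypothetical stand-ins (for the stitching module 600, a calibration routine, the video-display-controller 620, and the user's toggled choice), not actual APIs of the apparatus.

```python
def manual_recalibration_flow(stitch, recalibrate, display, get_user_choice):
    """Sketch of steps S31-S35: show a stitched image, let the user trigger
    re-calibration, show the re-stitched result, and toggle between versions."""
    first = stitch(calibration=None)       # S31: stitch/display with current calibration
    display(first)
    updated = recalibrate()                # S32/S33: user-initiated re-calibration
    second = stitch(calibration=updated)   # S34: re-stitch with updated data
    display(second)
    versions = {'A': first, 'B': second}   # S35: toggling display mode
    return versions[get_user_choice()]
```

The same sequence applies to the stereo method of Fig. 34, with synthesis in place of stitching.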

In Fig. 34, a top-level flowchart illustrates a manually-initiated calibration method for stereo images, which can include the following steps:

Step S41 causing a first stereo image 30 to be displayed on an external or onboard display device. The display can be autostereoscopic or not; if not, 3D glasses may, for example, need to be worn by a user.

Step S42 receiving user input manually initiating a computing of updated image-alignment calibration data for each of the two cameras 520.

Step S43 performing a user-initiated computing in response to the user input of Step S42, in accordance with calibration methods described herein.

Step S44 causing a second synthesized stereo image 30 to be displayed, using the updated image-alignment calibration data.

Step S45 providing a toggling display mode for toggling back-and-forth between the first and second stereo images.

Not all of the steps are necessarily carried out in the performance of the method.

Discussion of asymmetric cropping (Figs. 35A-45)

Embodiments of a dual-mode (stereoscopic mode and panoramic mode) camera apparatus that feature asymmetric cropping are disclosed herein. A suitable camera apparatus, for example, is the camera-loaded platform assembly 188 shown in Figs. 35A and 35B. Figs. 35A and 35B schematically illustrate an idealized camera-loaded platform assembly 188 comprising camera platform assembly 168 and first and second cameras 120A, 120B, each of which in preferred embodiments is equipped with an ultra-wide-angle lens (e.g., fisheye or rectilinear) and which therefore can be referred to as an 'ultra-wide-angle camera'. The cameras 120A, 120B are installed on respective camera platforms 140A, 140B which are joined by pivot member 144. Fig. 35A shows the side-by-side configuration and Fig. 35B shows the back-to-back configuration of the same camera-loaded platform assembly 188.

Each camera 120A, 120B is provided for the purpose of acquiring digital images and is capable of acquiring digital images, which can be still images or video images, using any of the digital image sensing technologies known in the art (including, but not exclusively, CCD, CMOS, NMOS, and live MOS technologies). The camera apparatus further comprises a multi-configuration housing assembly, which can be, for example, any platform assembly 186 discussed with respect to the various figures.

Referring to Fig. 36, a camera-loaded platform 188 is shown in side-by-side configuration, comprising a platform assembly 186 in the side-by-side configuration and two ultra-wide-angle cameras 120A, 120B. In the side-by-side configuration as illustrated in Fig. 36, the two cameras 120A, 120B are laterally displaced from one another. The optical axes 203A, 203B respectively of cameras 120A, 120B define an intermediating region of space 300 between the optical axes, and respective exterior regions of space 301A, 301B. As indicated by the parallel nature of the optical axes 203A, 203B respectively of cameras 120A, 120B, the cameras are substantially co-oriented.

As ultra-wide-angle cameras, the two cameras 120A, 120B have angles-of-view of more than 180°. Typical respective angles of view 205A, 205B for cameras 120A, 120B are illustrated in Fig. 37, which simulates what the angles-of-view would be if platform assembly 186 did not exist, i.e., the original angles-of-coverage of the two cameras 120A, 120B. Back in Fig. 36, the platform assembly 186 exists, and so the angle-of-view is 'cut off' by the platform assembly. Line 209A shows the restriction to the angle-of-view of camera 120A due to the presence of the platform assembly 186, and line 209B shows the restriction to the angle-of-view of camera 120B due to the presence of the platform assembly 186. This point is further illustrated in Fig. 38, where the shaded area represents an 'operative' angle-of-view of camera 120B, which is the camera on the right. Further to the right of the camera 120B, beyond the extent of the platform assembly in the camera's 'local' exterior region-of-space 301B, the original extent of the angle-of-view 205B is available, i.e., unblocked. To the left of Fig. 38, with the angle-of-view effectively blocked by the camera platform 186, the extent of the angle-of-view is limited to line 209B throughout much of the intermediating region 300 and throughout the 'remote' exterior region 301A. Obviously, a mirrored version of the drawing in Fig. 38 would show the operative angle-of-view for the left-side camera 120A similarly limited by the presence of the camera platform 186.

Image-processing circuitry (e.g., 282 in Fig. 44) provides a 180°+ asymmetric cropping mode in the side-by-side configuration in which each image is cropped asymmetrically while leaving an effective (post-cropping) angle-of-view greater than or equal to 180° for each of the two images acquired by the two cameras 120A, 120B.

In some embodiments, the ultra-wide-angle cameras 120A, 120B include lenses with coverage angles capable of producing hemispherical images, i.e., a circular image produced by capturing/acquiring an image of a hemispherical scene, or larger-than- hemispherical scene. In some embodiments, angles-of-view are limited by the image sensor of each camera. It is to be understood that Figs. 36, 37 and 38 and so on are merely two-dimensional elevation-view cross-sectional representations of angles-of- view, provided to illustrate some embodiments. Asymmetrical cropping may take place at any part of a hemispherical image (or rectilinear image created using a lens capable of hemispherical angles of coverage) as illustrated in Fig. 39.

In some embodiments the cameras 120A, 120B are flush with the upper surface of the camera platform 168. (The word 'upper' is used here only for convenience in accordance with how the elements are drawn in the figures, and it should be taken to mean the surface upon which the cameras are mounted.) In other embodiments, the cameras 120A, 120B or components thereof (such as lenses and/or lens mounts) protrude from the upper surface of the camera platform 168. In such embodiments, it can be desirable to implement asymmetric cropping to exclude not only the camera platform 168 but also the 'other' camera 120A or 120B as well. Fig. 40 is similar to Fig. 37, but adds a dashed line showing a cropping angle, indicating the extent to which the effective angle-of-view of camera 120B is blocked by the camera platform 186 and the 'other' camera 120A. Selecting an effective post-cropping angle-of-view by the camera apparatus operating in its 180°+ asymmetric cropping mode - the angle between the original 'angle-of-view' line 205B and the 'cropping angle' dashed line - thus excludes the camera platform and camera and leaves an effective post-cropping angle-of-view of greater than 180°.

Figs. 38 and 40 are both drawn so as to show the asymmetric cropping of an image acquired by the right-side (in the figures) camera 120B. Thus, the effective after-cropping angle-of-view between line 205B and the 'cropping angle' dashed line excludes both the camera platform 168 and the other camera 120A from the cropped image, and the cropping again leaves an effective post-cropping angle-of-view of greater than 180°.

The extent of cropping on the right side of Fig. 40, in the 'local' exterior region 301B of camera 120B, is essentially zero, i.e., no cropping is done in the section of an image acquired by the 'right-side' camera 120B that captures the portion of the scene within local exterior region-of-space 301B. On the other hand, the extent of cropping in the intermediating region-of-space 300 is clearly greater than in the local exterior region-of-space 301B in both figures. In addition, the extent of cropping in the 'remote' exterior region-of-space 301A is greater than in the local exterior region-of-space 301B in both figures. This is evidenced not only from visual inspection of the figures, but also from the fact that the uncropped image acquired by camera 120B would have had an original angle-of-view out to line 205B in both directions in the two-dimensional representations of the figures (see Fig. 37), and the 'cropping angle' of Fig. 40 reduces this angle-of-view to exclude the camera platform 186 and the respective 'other' camera 120A and therefore to become a smaller angle-of-view. Thus, the asymmetric cropping disclosed herein, including that accomplished by the two cameras 120A, 120B using the 180°+ asymmetric cropping mode, is based at least in part upon the concept that the extent of cropping in the respective remote exterior region-of-space (e.g., 301A for camera 120B), as well as in the intermediating region-of-space 300, will always be greater than the extent of cropping in the respective local exterior region-of-space (e.g., 301B for camera 120B).

The example of Fig. 40 relates to asymmetric cropping that accounts for the relative position of hardware such as the camera platform 186 or the respective 'other' camera (120A or 120B). This example is not meant to imply any limitation of asymmetric cropping to this specific example. Asymmetric cropping may be performed in the 180°+ asymmetric cropping mode to any extent that permits an after-cropping angle-of-view of at least 180°. The selection of the extent of the cropping (or the selection of the effective post-cropping angle-of-view) can be based on, to name a few additional non-limiting examples, physical or optical characteristics of the respective lenses of the cameras 120A, 120B, the specifications of a display device or storage medium, or general aesthetic reasons. In some embodiments, the 180°+ asymmetric cropping mode is operative to crop images to an effective after-cropping angle-of-view of exactly 180°. In some embodiments, cropping is performed at two or more locations in an image, as illustrated two-dimensionally in example C in Fig. 39, or continuously or nearly continuously around the periphery of an image; in such embodiments it can be desirable to adhere to some of the other features and cropping conditions discussed above, for example that the extent of cropping through the intermediating and remote regions-of-space both be greater than the extent of cropping through the local region-of-space for each camera, or for example that the camera platform and/or respective 'other' camera be cropped out of the image.
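The cropping conditions above (asymmetry, and a post-crop angle-of-view of at least 180°) can be expressed numerically. The following minimal sketch checks them for a two-dimensional elevation-view model of one camera; the function and parameter names are illustrative assumptions, not part of the disclosed apparatus.

```python
def post_crop_aov(original_aov_deg, crop_local_deg, crop_remote_deg):
    """Effective angle-of-view (degrees) after asymmetric cropping, in a 2-D
    elevation-view sketch. The 180°+ asymmetric cropping mode requires that
    the crop on the remote/intermediating side exceed the crop on the local
    side, and that the result stay at or above 180 degrees."""
    assert crop_remote_deg > crop_local_deg, "cropping must be asymmetric"
    effective = original_aov_deg - crop_local_deg - crop_remote_deg
    assert effective >= 180.0, "180°+ mode requires a post-crop AOV of at least 180°"
    return effective
```

For example, a 200° lens cropped by 15° on the platform side and 0° on the local side retains a 185° effective angle-of-view.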

The above discussion with respect to Fig. 40 relates only to the cropping of images acquired by camera 120B, but it should be obvious that the same embodiments would apply in equal measure to camera 120A, with the drawings reversed left-right. Fig. 41 illustrates the cropping of two images - a first image acquired by camera 120A and a second image acquired by camera 120B. The first image is cropped (in this two-dimensional representation) at the dashed/dotted line marked 'cropping angle (120A)', and thus the effective after-asymmetric-cropping angle-of-view of the first image is from the cropping angle (120A) dashed/dotted line to the original boundary of the angle-of-view of camera 120A, which is line 205A. The second image is cropped (as in Fig. 40) at the dashed line marked 'cropping angle (120B)', and thus the effective after-asymmetric-cropping angle-of-view of the second image is from the cropping angle (120B) dashed line to the original boundary of the angle-of-view of camera 120B, which is line 205B. Both effective after-cropping angles-of-view are greater than 180°, and in addition both effective after-cropping angles-of-view are quantitatively equal, although of course reversed in space (mirror images of each other).

Referring now to Figs. 42 and 43, an area of stereo overlap is shown according to various embodiments, for a camera apparatus of any of the preceding embodiments. Fig. 42 (based on Fig. 37) shows the uncropped stereo overlap - the intersection of the two angles-of-coverage of cameras 120A, 120B without any restrictions (e.g., camera platform 186). Any part of an imaged scene within the stereo overlap region appears in both left- and right-side images; the more of a scene that is within the stereo overlap region, the greater the stereoscopic effect of a complex (stereoscopic) image formed from the two images. In the example shown in Fig. 42, the stereo overlap region encompasses a more-than-180° angle-of-view.

Fig. 43 shows a 'post-crop' stereo overlap, i.e., the angle of the stereo overlap that would result from cropping both images along the 'cropping angle' dashed line of Fig. 40 with respect to an image acquired by camera 120B (and its mirror-image with respect to an image acquired by camera 120A), for example in order to avoid including both the camera platform 186 and the respective 'other' camera in either of the cropped images. In the example shown in Fig. 43, the stereo overlap encompasses an angle-of-view less than 180°, but still encompasses a majority of the original uncropped stereo overlap region of Fig. 42. Moreover, the stereo overlap portion comprises a majority of each of the cropped images.
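Under the same two-dimensional far-field simplification, the post-crop stereo overlap of Figs. 42-43 can be estimated by intersecting the two cameras' azimuth intervals. All names below are illustrative assumptions; β (half of the excess angle-of-view beyond 180°) and the crop angle are free parameters, not values disclosed for the apparatus.

```python
def interval_overlap(a, b):
    """Angular overlap (degrees) of two far-field azimuth intervals,
    each given as (min_deg, max_deg) about the shared forward direction."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return max(0.0, hi - lo)

def stereo_overlap_after_crop(beta_deg, crop_deg):
    """Far-field stereo overlap for two co-oriented cameras, each with an
    angle-of-view of 180 + 2*beta degrees, each cropped by crop_deg on its
    remote side only (asymmetric cropping, zero crop on the local side)."""
    left_cam  = (-90.0 - beta_deg, 90.0 + beta_deg - crop_deg)   # 120A: crop on its right (remote) side
    right_cam = (-90.0 - beta_deg + crop_deg, 90.0 + beta_deg)   # 120B: crop on its left (remote) side
    return interval_overlap(left_cam, right_cam)
```

With β = 10° and no cropping the overlap is 200° (more than 180°, as in Fig. 42); cropping 15° from each remote side reduces it to 170° (less than 180°, but still a majority of the uncropped overlap, as in Fig. 43).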

Fig. 44 is a block diagram of a camera apparatus 400. After carrying out a cropping operation using the 180°+ asymmetric cropping mode using the image-processing circuitry 282 of the camera apparatus 400 and combining the two cropped images into a complex (panoramic or stereoscopic) image, the complex image can be displayed by a display device 360 (onboard or in electronic communication with the camera apparatus), using a video-display controller 288, and/or stored in a non-transitory computer-readable storage medium by executing software 365. The software 365 can be executed by the image-processing circuitry 282, or alternatively by an additional processor (or processors) in a separate accessory (not shown) of the camera apparatus 400 such as a docking terminal or other external user terminal (e.g., external user terminal 360). Storing a stereoscopic image should be done in a format readable by a 3D image display device. Display of a stereoscopic image can be done using any of the 3D display technologies available, such as, for example (but not exhaustively), displaying anaglyph images, displaying with polarizing filters, and displaying with alternate-frame sequencing.

A method of forming complex images from images acquired using a camera apparatus (e.g., imaging apparatus 400) comprising two camera platforms 140A, 140B and two ultra-wide-angle cameras 120A, 120B respectively installed thereupon is disclosed. The two camera platforms 140A, 140B are preferably pivotably connected to each other to form a camera platform 168 which is operable to be user-manipulated alternatively to a substantially co-planar side-by-side configuration or to a back-to-back configuration, the complex images being user-selectable from a group consisting of panoramic and stereoscopic images. The method is illustrated in the flow chart in Fig. 45.

Embodiments of the present invention relate to a solution for a problem described below with reference to Fig. 47.

Figs. 46A-46C illustrate first 708A and second 708B platform assemblies. In Fig. 46A, the platform assemblies 708A, 708B, are disengaged from each other. In Figs. 46B- 46C, the platform assemblies 708A, 708B are engaged (e.g. attached - for example, rigidly attached) to each other.

First platform assembly 708A comprises two cameras 790A, 790B that are each mounted onto platform 716A and laterally spaced from each other along a first lateral displacement axis 714A by a displacement distance lat_dist_offset, for example an interpupillary distance. The optical axis of each of the two cameras 790A, 790B is normal to the first lateral displacement axis 714A. In the non-limiting example of Figs. 46A-46C, each module (i.e. platform assemblies 708A, 708B thereof) is foldable. This is not a limitation - Figs. 46F-46G show an example that is not required to be foldable.

Second platform assembly 708B comprises two cameras 790C, 790D that are each mounted onto platform 716B and laterally spaced from each other along a second lateral displacement axis 714B by a displacement distance lat_dist_offset, for example an interpupillary distance. The optical axis of each of the two cameras 790C, 790D is normal to the second lateral displacement axis 714B.

The two lateral displacement axes 714A, 714B are parallel to each other. In the top of Fig. 46A, each foldable platform assembly 708A, 708B is in the side-by-side configuration and may each individually function to acquire ~180° stereoscopic image data. Thus, in the top of Fig. 46A, each foldable platform 716A, 716B is in the unfolded configuration - in this case, corresponding to a side-by-side (SBS) configuration. At the bottom of Fig. 46A, each foldable platform assembly 708A, 708B is in the back-to-back configuration and may each individually function to acquire ~360° panoramic (e.g. 2D-panoramic) image data.

In Figs. 46B-46C, the four cameras 790A, 790B, 790C, 790D are disposed (e.g. rigidly disposed) so that camera 790A is laterally aligned with camera 790D, and camera 790B is laterally aligned with camera 790C. The optical axes of cameras 790A and 790D point away from each other; the optical axes of cameras 790B and 790C point away from each other.

As will be discussed below, in Fig. 46C, cameras 790A-790D are respectively labelled L1, R1, L2 and R2. Embodiments of the invention relate to producing a 360° panoramic fully stereoscopic image of a scene from only four cameras when cameras 790A-790D are disposed as illustrated in Fig. 46C or Fig. 46E or Fig. 46H or the left-hand side of Fig. 47 or Fig. 50B or Fig. 52C or in any L1-R1-L2-R2 configuration.

Figs. 46A-46C show one example of providing cameras (e.g. from first and second modules) so that the four cameras may be held (e.g. rigidly held to maintain relative position and orientation) in a L1-R1-L2-R2 configuration. Figs. 46D-46E show a second example of providing cameras (e.g. from first and second modules) so that the four cameras may be held (e.g. rigidly held to maintain relative position and orientation) in a L1-R1-L2-R2 configuration. Figs. 46F-46G show another example of providing cameras (e.g. from first and second modules) so that the four cameras may be held (e.g. rigidly held to maintain relative position and orientation) in a L1-R1-L2-R2 configuration. In the example of Fig. 46H, the offset distance dist2_offset is larger - this may increase a magnitude of a stereoscopic effect at or near plus/minus 90 degrees (see Fig. 55F). For example, a spacer 798 may be provided for this purpose.

Problem-solution of Fig. 47 - it is known in the art that the fewer the cameras available, the more difficult it is to obtain a 360° fully-stereo panoramic image of the scene. In the event that only four cameras are available (e.g. to reduce cost), the best configuration would be that on the right-hand side of Fig. 47. However, due to other reasons (e.g. the need for each module of Figs. 46A-46H to function individually, for example when disengaged from the other module), the configuration of the right-hand side of Fig. 47 might not always be available. Conventional techniques are not capable of producing a 360° fully-stereo panoramic image from images of a scene acquired when the four cameras are in the L1-R1-L2-R2 configuration shown on the left-hand side of Fig. 47 - this will be discussed below with reference to Figs. 54D-54E. Embodiments of the invention overcome this challenge to produce the 360° fully-stereo panoramic image of the scene from images acquired when the four cameras are (e.g. rigidly) in the L1-R1-L2-R2 configuration (e.g. see the discussion with reference to Figs. 55E-55F).

NOTE about '360° fully-stereo panoramic image' - The skilled artisan will appreciate that not every 360° stereo panoramic image is fully stereo. For example, the image may be panoramic over 360 degrees and have a stereoscopic effect over most angles - see Figs. 54D-54E for such an example that is not a 360° fully-stereo panoramic image. Even if all locations over 360° are viewable and appear in left and right panoramic images, this is not enough for a 360° fully-stereo panoramic image.

In embodiments of the invention, to go from disengaged to engaged, an inter-module docking assembly may be provided for holding the four cameras in rigid position and orientation relative to each other (e.g. to maintain a L1-R1-L2-R2 four-camera configuration). This may include mechanical and/or magnetic and/or any other appropriate components. The inter-module docking assembly may reside at least in part on either module - for example, this may occur in Figs. 46A-46E where the inter-module docking assembly is not shown. Alternatively or additionally, the inter-module docking assembly may be provided at least in part as an additional object.

Fig. 48A shows the angle numbering convention used for Figs. 48A-55. Once a specific direction is defined as the 'forward direction' 760, the forward direction is assigned an angle value of 0°. Starting from the angle value of 0° in the forward direction 760, the angle value increases monotonically in the clockwise direction until it reaches a value of 180° - thus, (i) an angle value for the rightward direction 762 is defined as 90°; and (ii) an angle value for the backward direction 764 is defined as 180°. Starting from the angle value of 0° in the forward direction 760, the angle value decreases monotonically in the counterclockwise direction until it reaches a value of -180° - thus, (i) an angle value for the leftward direction 766 is defined as -90°; and (ii) the backward direction 764 may be considered to have a value of -180°. Because a circle has 360 degrees, the backward direction 764 may be assigned angular values of both 180° and -180°.
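The convention of Fig. 48A amounts to normalizing any angle into the range (-180°, 180°], with forward = 0° and clockwise positive. A minimal helper (the function name is an illustrative assumption) can be sketched as:

```python
def signed_azimuth(deg):
    """Map an arbitrary angle (degrees) onto the convention of Fig. 48A:
    0 is forward (760), +90 rightward (762, clockwise positive),
    -90 leftward (766), and backward (764) is represented as 180."""
    a = deg % 360.0          # fold into [0, 360)
    if a > 180.0:
        a -= 360.0           # fold the counterclockwise half into (-180, 0)
    return a
```

Note that under this convention the backward direction can equivalently be written as -180°; the helper arbitrarily returns +180° for that direction.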

Reference is now made to Fig. 48B, which shows four objects 770A-770D in a scene. In various examples discussed below, first through fourth objects 770A-770D are respectively disposed along the four axes which correspond to the forward 760, rightward 762, backward 764 and leftward 766 directions.

Figs. 49A-49D illustrate a human reference viewer RV (having left 754 and right 758 eyes). In Figs. 49A-49D, the human viewer respectively faces in the forward 760, rightward 762, backward 764 and leftward 766 directions. In each of Figs. 49A-49D, the human viewer's left eye 754 looks slightly to the right to view objects 770A-770D respectively, and the human viewer's right eye 758 looks slightly to the left to view objects 770A-770D respectively.

Fig. 50A shows two human viewers 800, 802 - the human viewers are oriented in opposite directions. The images of the human viewers are distorted to exaggerate the distance between each viewer's eyes 812A-812D. In the example of Fig. 50A, the two viewers 800, 802 collectively have four eyes. The left eye 812A of forward-oriented viewer 800 may thus be referred to as a first left eye L1; the right eye 812B of forward-oriented viewer 800 may thus be referred to as a first right eye R1; the left eye 812C of backward-oriented viewer 802 may thus be referred to as a second left eye L2; and the right eye 812D of backward-oriented viewer 802 may thus be referred to as a second right eye R2.

Fig. 50B shows the multi-platform camera assembly of Figs. 46B-46C.

Comparing Figs. 50A and 50B, it is clear that (i) the position of camera 790A in Fig. 50B corresponds to that of the left eye 812A of forward-oriented viewer 800 - thus, camera 790A may be labelled with L1; (ii) the position of camera 790B in Fig. 50B corresponds to that of the right eye 812B of forward-oriented viewer 800 - thus, camera 790B may be labelled with R1; (iii) the position of camera 790C in Fig. 50B corresponds to that of the left eye 812C of backward-oriented viewer 802 - thus, camera 790C may be labelled with L2; and (iv) the position of camera 790D in Fig. 50B corresponds to that of the right eye 812D of backward-oriented viewer 802 - thus, camera 790D may be labelled with R2.

In Figs. 50A-50B, a first displacement vector connecting L1 camera 790A to R1 camera 790B is labeled 726A - a magnitude of the first displacement vector 726A is lat_dist_offset, which is the lateral offset distance between the two cameras 790A, 790B. For example, the first displacement vector 726A is parallel to a plane (e.g. best-fit plane) of platform 716A. A second displacement vector connecting L2 camera 790C to R2 camera 790D is labeled 726B - a magnitude of the second displacement vector 726B is also lat_dist_offset. For example, the second displacement vector 726B is parallel to a plane (e.g. best-fit plane) of platform 716B.

Geometric Conditions of the L1-R1-L2-R2 configuration for four cameras — The four cameras 790A-790D of Fig. 50B are disposed relative to each other in the "L1-R1-L2-R2" configuration, which is defined with reference to Fig. 50B as having all of the following features:

A. first (L1) 790A and second (R1) 790B wide-angle cameras laterally displaced from each other by a lateral displacement distance lat_dist_offset along a first lateral displacement axis 714A, the first (L1) 790A and second (R1) 790B wide-angle cameras being configured to respectively acquire first (L1_img) and second (R1_img) images of a scene, optical axes of the first (L1) 790A and second (R1) 790B wide-angle cameras being parallel to each other and normal to the first lateral displacement axis 714A;

B. third (L2) 790C and fourth (R2) 790D wide-angle cameras laterally displaced from each other by the lateral displacement distance lat_dist_offset along a second lateral displacement axis 714B parallel to the first lateral displacement axis 714A, the third (L2) 790C and fourth (R2) 790D wide-angle cameras configured to respectively acquire third (L2_img) and fourth (R2_img) images of the scene, optical axes of the third (L2) 790C and fourth (R2) 790D wide-angle cameras being parallel to each other and normal to the second lateral displacement axis 714B (the axes 714A, 714B being parallel to each other);

C. the first (L1) 790A and fourth (R2) 790D wide-angle cameras are laterally aligned with each other, the first (L1) 790A and fourth (R2) 790D wide-angle cameras facing away from each other;

D. the second (R1) 790B and third (L2) 790C wide-angle cameras are laterally aligned with each other, the second (R1) 790B and third (L2) 790C wide-angle cameras facing away from each other; and

E. each 790A, 790B, 790C, 790D of the wide-angle cameras has a common angle of view (i.e. 180°+2β), which exceeds 180 degrees.

Angles of View and Visible Regions of Space with reference to Figs. 50A-50B and 51A-51D — One salient feature of each of cameras 790A-790D is that each camera has an angle of view which exceeds 180°. In embodiments of the invention, the angle of view of each of cameras 790A-790D equals 180°+2β, where β is a positive number which satisfies 0° < β < 30°. Figs. 51A-51D respectively illustrate the regions of space that are visible by cameras 790A-790D. In particular, the platforms on which the cameras are mounted (i.e. 716A for cameras 790A-790B and 716B for cameras 790C-790D) may occlude certain regions of space. As shown in Fig. 51A, the region of space physically viewable by camera 'L1' 790A is bounded by (i) the marginal ray 890A of camera 'L1' 790A and (ii) the second limit 892A of view for 'L1' due to an occluding presence of platform 716A.

As shown in Fig. 51B, the region of space physically viewable by camera 'R1' 790B is bounded by (i) the marginal ray 890B of camera 'R1' 790B and (ii) the second limit 892B of view for 'R1' due to an occluding presence of platform 716A. As shown in Fig. 51C, the region of space physically viewable by camera 'L2' 790C is bounded by (i) the marginal ray 890C of camera 'L2' 790C and (ii) the second limit 892C of view for 'L2' due to an occluding presence of platform 716B. As shown in Fig. 51D, the region of space physically viewable by camera 'R2' 790D is bounded by (i) the marginal ray 890D of camera 'R2' 790D and (ii) the second limit 892D of view for 'R2' due to an occluding presence of platform 716B.

Figs. 52A-52C schematically show the four cameras in the L1-R1-L2-R2 configuration. Although the number of cameras in each of Figs. 52A-52C is relatively small (i.e. only four cameras), a novel solution to the problem of providing a stereoscopic 360° image of a scene from only four cameras is now presented. In embodiments related to Figs. 55A-55F, this stereoscopic 360° image is fully stereoscopic - i.e. for all angles of a circle, the image provides a 'correct' and non-zero stereo effect (see Fig. 55F). In some embodiments, the technology of Figs. 55A-55F reduces the cost of a device capable of acquiring a fully stereoscopic 360° image of a scene, since only four cameras (e.g. rigidly oriented relative to each other) in a L1-R1-L2-R2 configuration are required.

Fig. 52C shows in a single figure all of lines 890A-890D and 892A-892D. These lines describe the regions of space that are visible from each of cameras 790A-790D when in the L1-R1-L2-R2 configuration. Fig. 53A only relates to cameras L1 790A and L2 790C; Fig. 53B only relates to cameras R1 790B and R2 790D. Fig. 53C relates to all four cameras 790A-790D.

Thus, as shown in Fig. 53A, (i) a region of space between -180° and -90°-β is visible by camera L2 790C but is not visible by camera L1 790A; (ii) a region of space between -90°-β and -90° is visible by both camera L1 790A and camera L2 790C; (iii) a region of space between -90° and 90°-β is visible by camera L1 790A but is not visible by camera L2 790C; (iv) a region of space between 90°-β and 90° is visible by both camera L1 790A and camera L2 790C; and (v) a region of space between 90° and 180° is visible by camera L2 790C but is not visible by camera L1 790A.

Thus, as shown in Fig. 53B, (i) a region of space between -180° and -90° is visible by camera R2 790D but is not visible by camera R1 790B; (ii) a region of space between -90° and -90°+β is visible by both camera R1 790B and camera R2 790D; (iii) a region of space between -90°+β and 90° is visible by camera R1 790B but is not visible by camera R2 790D; (iv) a region of space between 90° and 90°+β is visible by both camera R1 790B and camera R2 790D; and (v) a region of space between 90°+β and 180° is visible by camera R2 790D but is not visible by camera R1 790B.
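The visibility regions of Figs. 53A-53B can be summarized programmatically. The sketch below is an illustration only (the function and parameter names are assumptions, not taken from the specification); it returns which of the four cameras can physically see a given azimuth angle, taking the forward direction of cameras L1 and R1 as 0° and using the occlusion limits described above:

```python
def visible_cameras(angle, beta):
    """Return the set of cameras that can physically see azimuth `angle`
    (degrees, in [-180, 180]), where each camera has an angle of view
    of 180 + 2*beta degrees and is partly occluded by its platform."""
    cams = set()
    if -90 - beta <= angle <= 90:           # L1: marginal ray 890A to platform limit 892A
        cams.add("L1")
    if -90 <= angle <= 90 + beta:           # R1: platform limit 892B to marginal ray 890B
        cams.add("R1")
    if angle >= 90 - beta or angle <= -90:  # L2: marginal ray 890C / platform limit 892C
        cams.add("L2")
    if angle >= 90 or angle <= -90 + beta:  # R2: platform limit 892D / marginal ray 890D
        cams.add("R2")
    return cams
```

For example, with β = 10°, an azimuth of 0° is seen only by L1 and R1, while -95° is seen by L1, L2 and R2, matching region (ii) of Fig. 53A and region (i) of Fig. 53B.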

In Fig. 53C, the information of Figs. 53A-53B is presented in a single figure. For the present specification, (i) an image of a scene acquired by camera L1 790A is referred to as L1_img; (ii) an image of a scene acquired by camera R1 790B is referred to as R1_img; (iii) an image of a scene acquired by camera L2 790C is referred to as L2_img; and (iv) an image of a scene acquired by camera R2 790D is referred to as R2_img. These images may be cropped and stitched to form panoramic image(s).

Obtaining a 360° stereoscopic image of a scene using only four cameras — In some embodiments, it is a goal to employ four cameras in the L1-R1-L2-R2 configuration (i.e. only four cameras) to acquire a 360° stereoscopic image. Two techniques are now discussed - a first stitching technique is discussed with reference to Figs. 54A-54E, and a second stitching technique is discussed with reference to Figs. 55A-55F. Both techniques are capable of obtaining a 360° image - according to the first technique there is a positive non-zero stereo effect for most angles of the angle range [-180°, 180°], and according to the second technique there is a positive non-zero stereo effect for all angles of the angle range [-180°, 180°]. Thus, the second technique may be said to produce a 360° fully-stereo panoramic image of the scene using only four cameras in the L1-R1-L2-R2 configuration.

In Figs. 54A-54C, (i) the left-eye image is a stitching between images L1_img, L2_img respectively acquired by cameras L1 790A and L2 790C; and (ii) the right-eye image is a stitching between images R1_img, R2_img respectively acquired by cameras R1 790B and R2 790D. This is shown in Fig. 54A without stitch lines, in Fig. 54B according to one possible set of stitch lines, and in Fig. 54C according to another possible set of stitch lines. Assuming the scene which is imaged only includes the four objects 770A-770D of Fig. 48, the generated left-eye and right-eye images (e.g. according to the stitching-line placement of Fig. 54C) are shown in Fig. 54D.
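The first technique's eye-to-camera mapping can be sketched as follows. This is an illustration only: the stitch lines are assumed here to lie at exactly ±90°, which is just one of the placements shown in Figs. 54B-54C, and the function name is an assumption:

```python
def source_image_first_technique(angle, eye):
    """First stitching technique (Figs. 54A-54C): each eye is stitched
    only from its own side's cameras (L1/L2 for the left eye, R1/R2
    for the right eye). `angle` is in degrees, in [-180, 180)."""
    if -90 <= angle < 90:                   # forward half of the panorama
        return "L1_img" if eye == "left" else "R1_img"
    return "L2_img" if eye == "left" else "R2_img"  # backward half
```

Near ±90° the two cameras contributing to each eye (L1/L2 or R1/R2) have their baseline nearly parallel to the viewing direction, which is consistent with the collapse of the stereo effect at those angles shown in Fig. 54E.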

Although this succeeds in acquiring a 360° panoramic image of the scene that is stereoscopic for most angles, the image of Fig. 54D is not a 360° fully-stereo panoramic image of the scene.

Consider object 770A as shown in Fig. 54D and as viewed by the human viewer in Fig. 49A. In Fig. 49A, the human viewer's left eye 754 views object 770A towards the right and the human viewer's right eye 758 views object 770A towards the left - this is replicated in Fig. 54D, where a position of object 770A in the top image (LEFT EYE) is slightly to the right of a position of object 770A in the bottom image (RIGHT EYE). The effect of Fig. 49C is also replicated in Fig. 54D for object 770C. However, for objects 770B and 770D the effects of Figs. 49B and 49D are not replicated - this is because (i) a position of object 770B in the top image (LEFT EYE) is NOT slightly to the right of a position of object 770B in the bottom image (RIGHT EYE), thereby failing to reproduce the required stereo effect of Fig. 49B; and (ii) a position of object 770D in the top image (LEFT EYE) is NOT slightly to the right of a position of object 770D in the bottom image (RIGHT EYE), thereby failing to reproduce the required stereo effect of Fig. 49D.

Fig. 54E roughly and qualitatively shows a strength of the stereo effect for the technique of Fig. 54C, whose results are shown in Fig. 54D. As stated above and as is clearly evident from Fig. 54D, (i) the image of Fig. 54D is stereoscopic for most angles; and (ii) nevertheless, the image of Fig. 54D is not a 360° fully-stereo panoramic image of the scene. For angles close to -90° and 90° there is no stereoscopic effect, or it might even be regarded as a negative stereoscopic effect.

Contrast between the stitching techniques of Figs. 54 and 55 — As noted above, Figs. 54A-54C relate to a first stitching technique where: (i) the left-eye image is a stitching between images L1_img, L2_img respectively acquired by cameras L1 790A and L2 790C; and (ii) the right-eye image is a stitching between images R1_img, R2_img respectively acquired by cameras R1 790B and R2 790D.

In contrast, for the LEFT eye, Figs. 55A-55C relate to a second stitching technique where: i. an angle α has a value 0 < α < β; ii. over the range between -180° and 180°, the left-eye 360° stitched panoramic image (shown in Fig. 55A) is an ordered stitching L2_img / R2_img / L1_img / R1_img / L2_img where:

A. a stitch line between L2_img and R2_img is located at -90°-α;

B. a stitch line between R2_img and L1_img is located at -90°+α;

C. a stitch line between L1_img and R1_img is located at 90°-α; and

D. a stitch line between R1_img and L2_img is located at 90°+α.

For the right eye, over the range between -180° and 180°, the right-eye 360° stitched panoramic image (shown in Fig. 55B) is an ordered stitching R2_img / L1_img / R1_img / L2_img / R2_img where:

A. a stitch line between R2_img and L1_img is located at -90°-α;

B. a stitch line between L1_img and R1_img is located at -90°+α;

C. a stitch line between R1_img and L2_img is located at 90°-α; and

D. a stitch line between L2_img and R2_img is located at 90°+α.

For the four-object scene of Fig. 48, the results of the technique of Figs. 55A-55C are shown in Figs. 55D-55E. In contrast to the result of Fig. 54D, the panoramic image of Figs. 55D-55E is a 360° fully-stereo panoramic image of the scene.

Fig. 55F roughly and qualitatively shows a strength of the stereo effect for the technique of Figs. 55A-55C, whose results are shown in Figs. 55D-55E. In contrast to the image of Fig. 54D, the image of Figs. 55D-55E is a 360° fully-stereo panoramic image of the scene, and is stereoscopic for all angles. This includes angles close to -90° and 90° - in Fig. 54D there was no stereoscopic effect (or a negative effect) for these angles. In contrast, as is evident from Figs. 55D-55E, a stereoscopic effect is produced for these angles.

As shown in Figs. 56, 57A-57B and 58A-58B, in some embodiments teachings of (i) the 'asymmetric cropping' embodiments (see, for example, Figs. 36-44) may be combined with (ii) teachings related to the L1-R1-L2-R2 embodiments. Thus, instead of the 'physical visibility' limits (e.g. due to the presence of the platform) of lines 892A-892D of Fig. 52C, the angle of view may be asymmetrically reduced by cropping. For example, in Fig. 56 there is no cropping (though there could be some cropping in other examples) at the 'left extreme' of the post-cropped angle of view of camera 790A (which would remain marginal ray 890A), while there is cropping by γ at the second limit of view. Fig. 57A shows this applied to all four cameras of the L1-R1-L2-R2 configuration. Fig. 57B shows exterior and intermediating regions. For example, the post-cropping visibility may be as illustrated in Figs. 57A-57B and 58A-58B.

In some embodiments, positions and orientations of the first (L1) 790A, second (R1) 790B, third (L2) 790C, and fourth (R2) 790D cameras define:

(A) an intermediating region-of-space 300 between the first and second cameras and bounded by optical axes of the first 790A and second 790B cameras, the intermediating region-of-space 300 also being disposed between the third 790C and fourth 790D cameras and bounded by optical axes of the third 790C and fourth 790D cameras; and

(B) first 301A and second 301B exterior regions-of-space respectively bounded by the optical axes of the first and second cameras, wherein the first 301A exterior region-of-space is local with respect to both the first 790A and fourth 790D cameras and remote with respect to both the second 790B and third 790C cameras, and the second 301B exterior region-of-space is local with respect to both the second 790B and third 790C cameras and remote with respect to both the first 790A and fourth 790D cameras.

In some embodiments, the first (L1_img), second (R1_img), third (L2_img) and fourth (R2_img) images are all asymmetrically cropped to have a common post-cropping effective angle-of-view pc_aov (e.g. exceeding 180 degrees) so that, for each of the first (L1_img), second (R1_img), third (L2_img) and fourth (R2_img) images: i. an extent of cropping from a respective remote exterior region-of-space exceeds an extent of cropping from a respective local exterior region-of-space; and ii. an extent of cropping from the intermediating region-of-space exceeds an extent of cropping from the respective local exterior region-of-space.
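As a rough numerical sketch of these cropping conditions (purely illustrative; the variable names crop_local, crop_remote and crop_inter are assumptions, not taken from the specification, and the cropping extents are modeled simply as angles removed from each region), one can verify whether a proposed set of cropping extents for a single image satisfies conditions i-ii while keeping the post-cropping angle of view above 180°:

```python
def crop_is_valid(aov, crop_local, crop_remote, crop_inter, pc_aov_min=180.0):
    """Check the asymmetric-cropping conditions for one image:
    (i)  more is cropped from the remote exterior region-of-space than
         from the local exterior region-of-space;
    (ii) more is cropped from the intermediating region-of-space than
         from the local exterior region-of-space;
    and the post-cropping angle of view pc_aov still exceeds pc_aov_min.
    All arguments are angular extents in degrees."""
    pc_aov = aov - (crop_local + crop_remote + crop_inter)
    return (crop_remote > crop_local
            and crop_inter > crop_local
            and pc_aov > pc_aov_min)
```

For example, with an angle of view of 200° (β = 10°), cropping 1° from the local exterior region, 5° from the remote exterior region and 8° from the intermediating region leaves pc_aov = 186°, satisfying both conditions.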

An imaging apparatus comprising:

a. first (L1) 790A and second (R1) 790B wide-angle cameras laterally displaced from each other by a lateral displacement distance lat_dist_offset along a first lateral displacement axis 714A, the first (L1) 790A and second (R1) 790B wide-angle cameras being configured to respectively acquire first (L1_img) and second (R1_img) images of a scene, optical axes of the first (L1) 790A and second (R1) 790B wide-angle cameras being parallel to each other and normal to the first lateral displacement axis 714A;

b. third (L2) 790C and fourth (R2) 790D wide-angle cameras laterally displaced from each other by the lateral displacement distance lat_dist_offset along a second lateral displacement axis 714B parallel to the first lateral displacement axis 714A, the third (L2) 790C and fourth (R2) 790D wide-angle cameras configured to respectively acquire third (L2_img) and fourth (R2_img) images of the scene, optical axes of the third (L2) 790C and fourth (R2) 790D wide-angle cameras being parallel to each other and normal to the second lateral displacement axis 714B, wherein: i. the first (L1) 790A and fourth (R2) 790D wide-angle cameras are laterally aligned with each other, the first (L1) 790A and fourth (R2) 790D wide-angle cameras facing away from each other; ii. the second (R1) 790B and third (L2) 790C wide-angle cameras are laterally aligned with each other, the second (R1) 790B and third (L2) 790C wide-angle cameras facing away from each other; iii. each 790A, 790B, 790C, 790D of the wide-angle cameras has a common angle of coverage 180°+2β; and iv. a value of β exceeds 0° and is at most 30°; and

c. an image-processing module for forming left-eye and right-eye 360° stitched panoramic images as follows: i. an angle α has, for example, a value 0 < α < 2β; ii. over the range between -180° and 180°, the left-eye stitched 360° panoramic image is an ordered stitching L2_img / R2_img / L1_img / R1_img / L2_img where, for example: A. a stitch line between L2_img and R2_img is located at -90°-α; B. a stitch line between R2_img and L1_img is located at -90°+α; C. a stitch line between L1_img and R1_img is located at 90°-α; and D. a stitch line between R1_img and L2_img is located at 90°+α; and iii. over the range between -180° and 180°, the right-eye stitched 360° panoramic image is an ordered stitching R2_img / L1_img / R1_img / L2_img / R2_img where, for example: A. a stitch line between R2_img and L1_img is located at -90°-α; B. a stitch line between L1_img and R1_img is located at -90°+α; C. a stitch line between R1_img and L2_img is located at 90°-α; and D. a stitch line between L2_img and R2_img is located at 90°+α.

For example, and as shown in Figs. 55E-55F, a fully-stereo (i.e. stereoscopic over 360 degrees) 360° panoramic image of the scene may be produced.

The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described and embodiments of the present invention comprising different combinations of features noted in the described embodiments will occur to persons skilled in the art to which the invention pertains.

In the description and claims of the present disclosure, each of the verbs, "comprise", "include" and "have", and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb. As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a marking" or "at least one marking" may include a plurality of markings.

In the description and claims of the present disclosure, the following terms are used interchangeably with each other and have identical meanings when used with the same respective reference numbers in the drawings and also when used without any reference numbers: (a) "camera 520" and "ultra-wide-angle camera 520"; (b) "lens 530" and "ultra-wide-angle lens 530"; (c) "handle 510" and "handle portion 510" and "elongated handle portion 510"; (d) "frame 570" and "pivot assembly frame 570" and "docking frame 570" and "frame member 570"; (e) "pivot member 541" and "pivot attachment member 541"; (f) "a camera platform assembly 501" and "pivot assembly 501"; and (g) "camera apparatus 500" and "imaging apparatus 500".
