Tuesday, 27 March 2012

Building Mockups


Part 12  Mockups
Author  AndrewS 

History   My initial interest in camera ergonomics was provoked by the experience of using a Canon EOS 450D and Panasonic G1.   I bought these two cameras  in a quest to downsize from a kit based on a Canon EOS 40D,  a medium size DSLR.  I found the smaller cameras cramped, awkward and difficult to use compared with the 40D. I wondered if it was possible to design a camera about the same size as the G1 but with better ergonomics.
Photo 1 First Mockup Hold
Photo 2 First Mockup Size
So I built my first mockup, shown in Photos 1 and 2.  This turned out slightly smaller than a G1 but provides a hugely improved holding experience. It is a "proof of concept" design, the shape of which bears only passing resemblance to any existing camera. The "concept" which was tested and proved is that the ergonomic problems I had been experiencing with small cameras were due to design rather than size.  Furthermore I demonstrated to my own satisfaction that it is entirely possible to build excellent ergonomics into small cameras if an ergonomics centered approach is the principal driver of the design process.
Photo 3 Cylinder Eye level
Photo 4 Cylinder Monitor View
In Part 8 of this series I noted that an electronic camera could be designed to any shape at all. So I trialled a variety of shapes. One which showed some promise at the preliminary mockup stage was "the cylinder".  See Photos 3 and 4.   This is not a new idea. The Canon Jet/Epoca  of 1992 used a cylindrical format. In 2012 the designer Jean-Michel Bonnemoy proposed another variant on the same theme. A camera could be made this shape but I found the layout is not readily adaptable to interchangeable lenses and the unit must be held up to the eye, with limited opportunities for alternative holding positions.  So I settled on a fairly conventional shape for its superior ergonomic potential.

Over the last two years I have made five "grip only"  mockups, shown in Part 7 of this series and five full camera mockups, two of which are presented in detail here.  They represent the mirrorless interchangeable lens camera type which I have chosen as I believe it is the way forward for camera design.  Along the way there were many false starts, mockups abandoned part built and many changes to handle shape and size, button location and  viewfinder location.
Concepts   The process of making mockup cameras has taught me a great deal about ergonomic aspects of camera design.  The experience is liberating but also demanding.   I can, and did, put the viewfinder anywhere, the shutter button anywhere, all the other controls anywhere, without restriction. I can, and did, make the device and all its parts any size and shape at all. From all this some useful ideas have emerged.
Photo 5 Size Determinants
Camera body size  Is determined by the following factors (see Photo 5; a rough worked sketch follows the list):
* Monitor module size. This in turn is determined by the actual dimensions of the preview/review image, its aspect ratio, the location of camera status data on or below the image, and whether the monitor is fixed, flip up/down, or a full swing out and swivel type.
* Vertical height of the eye level viewfinder.  This in turn is partly determined by the size of the EVF chip, OVF module or DSLR pentaprism/mirror. It is also greatly affected by the designer's decision about the soft rubber eyecup around the viewfinder, how large this should be, what shape and whether it is fully contained within the dimensions of the camera body or allowed to protrude beyond the body.
* Width of the control panel. This is the part of the rear of the camera between the right side (as viewed by the user) of the monitor module and the right edge of the camera body.
* Inset of the optical axis from the left side of the camera body.  The further the optical axis is from the left side, the wider the body must be to make room for a handle and thumbrest.
* Handle and thumbrest size and configuration.
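To make the way these factors stack up concrete, here is a rough worked sketch in Python. Every millimetre value is an illustrative assumption, not a measurement taken from the mockups; the point is only that body width and height fall out as sums of the module dimensions listed above.

    # Illustrative only: hypothetical module dimensions in millimetres.
    monitor_width, monitor_height = 75, 50   # assumed preview image module
    status_strip_height = 8                  # assumed data strip below the image
    evf_housing_height = 22                  # assumed EVF module plus eyecup clearance
    left_margin = 6                          # assumed inset of monitor from the left edge
    control_panel_width = 40                 # assumed rear panel to the right of the monitor

    body_width = left_margin + monitor_width + control_panel_width
    body_height = monitor_height + status_strip_height + evf_housing_height

    print(f"approximate body width:  {body_width} mm")   # 121 mm with these assumptions
    print(f"approximate body height: {body_height} mm")  # 80 mm with these assumptions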
Lens size   Is principally determined by imaging sensor size.
Size does matter after all    I discovered that with good design a small camera can have excellent ergonomics. But there is a size range below which it becomes increasingly difficult to achieve functional harmony between the device and the hands which operate it. That size is achieved by the small mockup described in this section. With smaller units,  holding, viewing or operating inevitably suffer as there is simply not enough device real estate available for the requisite user interface modules.
Photo 6 SLR Style Schematic
Photo 7 Rangefinder Style Schematic
Shape matters   Very early in the gestation process of a camera design decisions are made about the basic shape and layout of the device. These decisions are "baked in" to the design and are the main factors which determine ergonomic performance of the final product.  For instance some mirrorless cameras have the shape of a small SLR even though there is no functional reason for them to be that shape.  See Photos 6 and 7.  These schematics illustrate two cameras, each the same width and height.  The Rangefinder Style version has the viewfinder optimally located and permits greater shutter button height which in turn allows the creation of a more secure handle design. In addition the lens axis on the rangefinder version can be moved further to the left, making room available on the front of the body for a more substantial handle.
Photo 8 Crafting Mockup
Method   I start with some basic dimensions of monitor module width and height, viewfinder height, body depth and rear control panel width. These derive from actual cameras which I have used.  I cut a piece of wood to the dimensions of the basic body then start crafting a handle and other parts. These can be seen in Photo 8.  I cut and shape the handle separately then screw and glue it to the body for final shaping. I selectively cut, file and sand some parts of the structure, then add to other sections with polyester cement, gradually building up a shape which slips easily into my hand when it is held in the "half closed, relaxed" position described in Parts 4 and 6 of this series. There is a great deal of trial and error in the process with many suboptimal versions being discarded. User interface modules are positioned and repositioned sometimes moving by as little as one millimeter to get them in the right place for optimal finger access.
The mockups may look a bit rough but the precise size and shape of every part and the precise size, type and  location of every interface module have been very carefully evolved over many trials.
Shaping is always guided by the "Form follows fingers" principle.
The lenses are plastic peanut butter jars chosen for size to represent typical standard zooms.
Aesthetics    I do take a hard line on ergonomic excellence and have  a low tolerance for ergonomic mistakes. However I am also acutely sensitive to aesthetic aspects of design.  This sensitivity is incorporated in all my mockups which emerge from the creative process with their own unique style signatures.
Exposition    Here I present two mockups, identified as "small" and "large".
Photo 10 Small Front
The one named "Small" is a good size for a consumer level mirrorless interchangeable lens camera with micro four thirds or APS-C sensor.  I made the body depth suitable for a flangeback distance in the range of about 20-25 mm. Dimensions allocated to the monitor would provide for a fixed type with an image diagonal of 75 mm or a swing out type with smaller image size.  A swing out monitor with a  75 mm image diagonal would add approximately 6 mm to the height and 12 mm to the width.  Overall dimensions of the body are Width 124 mm, Height 80 mm, Depth 55 mm. This is only marginally larger overall than a Panasonic G3 yet delivers dramatically better ergonomics. The control panel is more than twice as large, allowing a comfortably angled thumb rest and much larger control modules. The handle is fully contoured and accepts a full five finger grip with average size adult male hands. The main control dial is comfortably operated by the right index finger. The lens shown is about the size of a typical f3.5-5.6 standard kit zoom.
Photo 9 Height Comparison
The "Large" mockup represents an advanced level  mirrorless camera suitable for micro four thirds, APS-C or even 24x36  mm full frame, with large aperture zoom lenses.  The dimensions are Width 142 mm, Height 90 mm, depth 68 mm. Note that the height is the same as a Panasonic GH2 and the depth slightly less. This means the large mockup will easily fit into the same space in a camera bag as the GH2.  See Photo 9.  Yet it has a much larger and more comfortable handle, much larger and more naturally angled thumbrest and a control panel almost three times the area of that on the GH2. All this adds up to a massive improvement in holding, viewing and operating.  The lens shown is about the size of a pro style f2.0 - 2.8 zoom.
Photo 11 Large Front
The basic shape of both mockups could be described as "rangefinder style with handle and thumbrest", as distinct from "DSLR" shape. This was chosen for fundamental ergonomic reasons. The eye level viewfinder is located near the top left of the body (see Part 9). Both mockups share numerous design features.
They each exemplify a set of ergonomic design principles detailed in Parts 1-11 of this series. In summary these are:
* Style follows function
* Form follows fingers
* Specific operational tasks of the four phases of camera use. These are Setup, Prepare, Capture and Review.
* Specific requirements and analysis of the three tasks of camera operation in the Capture Phase. These are Holding, Viewing and Operating.
The EVF is located as described in Part 9.
The Handle is located and shaped as described in Part 7.  Both mockups use the parallel handle for its distinct ergonomic advantages.
The Thumbrest  I regard the thumbrest as of crucial importance in the total ergonomic design. It is the critical feature which if correctly designed allows the user to hold and operate the device simultaneously. If ill conceived and executed it prevents the user from so doing. Many compact and compact system cameras are so small that there is not enough width on the rear of the camera to fit a properly shaped and positioned thumb rest. So the user is offered no thumbrest, or a control dial where the thumbrest should be, or a vestigial bump way over on the right side (as viewed by the user) of the camera leading to a cramped, unnatural hand/finger position.
Photo 14 Small Hold
Photo 15 Large Hold
Hold Position, right hand   In Photos 14 and 15 you can see the basic right hand camera hold position. Camera operation starts from this position. Note the relaxed half closed posture of the hand as described in Part 4. Note that there are no interface modules located where the thumb or middle, ring and little fingers rest. The AF start button on the large version is beneath the terminal phalanx of the thumb but will not be pressed accidentally as a small but definite flexion of the interphalangeal joint is required to impart pressure on the button. There is a quartet of modules beneath and ready to be operated by the index finger which does not have to grip the camera.  

Corners  The exact amount and shape of rounding at the corners was determined after much experiment. You will notice there are no sharp edges or corners on either mockup and few flat planes other than the monitor and base. This shape was determined by feel, not some preconceived style. However the result turns out to have its own unique style which is not derivative of some abstract concept or a 1980's film camera.


Photo 12 Small Rear
Photo 16 Small Top
User Interface Modules   A great deal of thought and experiment has been expended in designing the type, position and function of the interface modules (see Part 10). They are located in high, medium or low priority camera real estate areas (see Part 8) , depending on their function in the four Phases of use. Photo 16,  Small Top and  Photo 12,  Small Rear show the interface modules.
The interface modules are in ergonomic functional groups.
Group 1  Right index finger, Capture Phase.   See Photos 14, Small Hold and  15, Large Hold.    The modules in this group are Shutter Button, Main Control Dial, ISO button and Exposure Compensation button.  Once the camera has been configured in Setup and Prepare phases it can be driven with this group while continuously viewing through the EVF and without changing grip with the right hand. The left hand will take care of zooming and manual focus if required. The right index finger can start/lock autofocus and auto exposure, change aperture in A Mode, shutter speed in S mode and program shift in P mode, change exposure compensation then capture the image. In M Mode pressing the Exposure Compensation button toggles between aperture and shutter speed adjusted by the main dial. One does not need any exposure compensation function in M Mode. It is not necessary to have one dial for aperture and another for shutter speed.
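As a way of visualising that control logic, here is a minimal Python sketch of how the main dial's target could be resolved from the shooting mode, including the M Mode toggle driven by the Exposure Compensation button. It is an illustration of the idea, not any camera's firmware, and the behaviour of the Exposure Compensation button outside M Mode (hold it while turning the dial) is my assumption.

    def main_dial_target(mode, exp_comp_held=False, m_toggle_to_aperture=False):
        # Sketch only: what the single main control dial adjusts in each mode.
        if mode == "M":
            # The Exposure Compensation button toggles the dial between shutter
            # speed and aperture, so a second dial is unnecessary in M Mode.
            return "aperture" if m_toggle_to_aperture else "shutter speed"
        if exp_comp_held:
            return "exposure compensation"   # assumed: button held while turning the dial
        return {"A": "aperture", "S": "shutter speed", "P": "program shift"}.get(mode, "none")

    print(main_dial_target("A"))                             # aperture
    print(main_dial_target("P", exp_comp_held=True))         # exposure compensation
    print(main_dial_target("M", m_toggle_to_aperture=True))  # aperture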
Group 2  Shooting Mode Dial and Drive Mode lever. This "set and see" control layout makes all the main shooting modes and drive modes visible at a glance and able to be altered quickly and directly, with no need to enter a menu. With more space available the large mockup also has a focus mode dial/area lever located on the right side of the top plate.  These dials/levers control  major Prepare Phase settings.
Group 3  Right thumb, Capture Phase.  There are only two modules in this group, the  JOG lever and the AF start/lock button. These are located so they can be operated by the right thumb without disrupting grip. This means the thumb has only to move a very small distance left or right and only a very slight amount of flexion of the interphalangeal joint is required. (see Part 6).
The JOG lever is a major interface module, the function of which is mode dependent. See Part 10.  I believe all cameras designed for controlled use should have a well designed JOG lever. It is imperative that the JOG lever be correctly located in three dimensions.
Photos 12 and 16  show correct location of a JOG lever on the small mockup. My right thumb goes directly onto the JOG lever  when moved slightly to the left of base position. Note also in Photo 16  the JOG lever stands 6 mm proud of the camera back. This means the ball of the user's thumb will press directly on the JOG lever with the thumb straight and without impinging on any other control module.
I regard the AF start/lock button as another vital interface module on a camera designed for user control. Unfortunately it is missing from many cameras. On the mockups it is positioned so it will be activated by a small movement of the right thumb, but will not be easily activated inadvertently. The AF start/lock button should be user configurable as auto exposure lock/autofocus lock, AF start or video start, with the shutter button's function configured to complement that of the AF button.
Group 1 and 3 modules are placed at high priority real estate locations on the camera.
Group 4   With reference to Photos 12 and 16, this group consists of button #3 on top and buttons #5, 8, 9, 10, 11, 12, 13 on the rear. Interface module layout on the large mockup is very similar but without the numbers.   These are direct interface modules for the Prepare Phase of use.  See Part 11.  On a production camera several of these would have a label, some would be unlabelled.  All would be able to accept a user assigned function from the full list of functions of which the camera is capable.
These are in medium priority real estate locations.
Group 5   With reference to Photo 12  this group consists of  buttons #13, 14, 15  applicable to the Review Phase of use.
Group 6  With reference to Photo 12, this group includes buttons #4, 17 and 16.  These are placed in low priority real estate locations and are applicable to the Setup Phase of use.  I would allocate the Main Menu to #17 and use #4 and 16 for user configured tasking.
Summary   These mockups incorporate the results of  my thinking and research on ergonomics described in Parts 1-11 of this series. They represent modern electronic cameras which encourage the practiced user to drive them like a sports car. Fast, accurate, enjoyable.










Friday, 02 March 2012

Setup Prepare Capture and Review Phases of Use


CAMERA ERGONOMICS
Part 11,  Setup, Prepare, Capture and Review phases of camera operation
Author  AndrewS
Photo 1 Landscape View Hold
In Part 5 of this series I identified the four basic phases of camera operation. These are Setup, Prepare, Capture and Review. 
Setup Phase. Time span: hours or more.
Attention priority: Camera directed. The user's attention is directed to the camera.
Tasks: These are the  tasks which need to be completed on first acquiring the camera and occasionally thereafter.  The user has a multitude of settings to make.     Snapshooters and controllers alike need to wade through the selection process.
Each camera will have a different list of selections to make but items which appear commonly include: time and date, language, file and folder settings, display settings, color space, RAW capture, sounds, button tasking, aspect ratio, image stabiliser, sensor cleaning, screen and VF brightness/contrast/color, AF priority, MF assist, EV steps, AEB settings, noise reduction, AF assist lamp, video AE mode, movie size/type, metering and focussing for movie, etcetera....etcetera....etcetera...... sometimes there are hundreds of decisions to be made, often involving interdependencies.
Photo 2 Portrait View Hold
Control Type:  Access is usually via a menu, activated by pressing a button, then scrolling through items with a control dial, jog type lever, four way controller or touch screen.
Control Location:  The Menu button is best located in a low value location on the camera.
Challenges:  Key words are clarify, simplify and coordinate.  Some cameras have a labyrinthine, convoluted menu system which is extremely difficult to use. It is most desirable to give users the option to allocate relatively frequently used items to a user sub menu and for this submenu to be first in the scrolling queue by default, or accessible by a separate button.
Example:  I recently bought a much advertised camera which had attracted favourable reviews. If I set Medium image size, RAW+JPG  in the setup menu, the camera would allow ISO 100--3200 and Dynamic Range (DR) of 100%--400%. But if I selected just RAW in the setup menu the camera allowed ISO 400--3200 and DR 100%. Then if I set Large image size yet another different set of available options was presented.  I could not fathom the logic of this at all. The same camera had many unexpected  interdependencies between Image Size, Mode Setting, RAW, JPG, ISO, DR and Shutter Speed, the purpose of which I could not grasp.  Worse, I frequently wanted a combination of settings which that camera would not permit.  Some interdependencies are logical. For instance if the user sets one of the fully automatic modes, images will be saved in JPG format, so RAW and  Adobe Color Space will be unavailable. But this camera made me feel like Alice exploring Wonderland, having no idea what  the next surprise might be. I was not amused to find myself an involuntary part of the manufacturer's beta testing programme for that product.
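The difference between a logical interdependency and an arbitrary one can be made explicit in software. A minimal Python sketch, assuming a simple rule table; the JPG-only consequence of a fully automatic mode is the example given above, and the rest of the names are made up for illustration:

    def available_options(shooting_mode):
        # Sketch only: derive each restriction from a stated, explainable rule.
        options = {
            "file format": ["RAW", "RAW+JPG", "JPG"],
            "color space": ["sRGB", "Adobe RGB"],
        }
        if shooting_mode == "Full Auto":
            # Logical rule: fully automatic modes save JPG only, so RAW and
            # Adobe RGB are withdrawn.  The user can see why.
            options["file format"] = ["JPG"]
            options["color space"] = ["sRGB"]
        return options

    print(available_options("Full Auto"))
    print(available_options("A"))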


Photo 3 Hold for Setup Prepare Review
Prepare Phase.  Time span: minutes.
Attention priority: Camera directed.
Tasks: In the period just before starting to take photos the user will want to ready the camera for the present environment. It might be landscape, macro, a party, night work, little athletics or whatever,  each requiring different settings.
Snapshooters will set the camera to fully automatic mode or one of the "Scene" modes.
Controllers have more work to do.  Items  usually set at this stage include:   P,A,S or M  shooting mode,  drive mode,  focus  mode and area, metering mode, white balance, flash mode, OIS, video modes and  dynamic range.  Experienced users will set ISO at this stage but may need to adjust it in Capture Phase. 
Control Type: There may be many settings to adjust in a short time so the controls need to provide quick access. Each of several approaches can work efficiently. The only hard and fast rule I would suggest is to keep Prepare Phase settings out of  the main menu. There is considerable benefit to having a physical Shooting Mode dial and Drive Mode selector. These allow the user to see and change current settings quickly, even before the camera is powered up.  With practice, camera work is faster when focus mode, metering mode, white balance and flash mode are allocated to hard buttons. The actual function of each button should be user selectable.   Other settings can be allocated to a "Quick Menu". The items in this mini menu should be user selectable, to accommodate individual styles of operating. Operating the quick menu  could  involve pressing a button then scrolling through options with a four way controller,  joystick or touch screen.
Control location: Hard controls for this phase are best allocated to medium value locations.
Challenges: The main challenge for camera designers is to clearly understand which tasks belong to the Prepare phase and which to the Capture phase. Making this explicit can wonderfully clarify what types of  controls are required and where they are best located.
Example 1:  Touch screen controls are all the rage at the moment. They might be useful in the Setup, Prepare and Review Phases of camera use. In these phases the user is looking at the camera, not the subject and is usually holding the camera away from the eye. Having said that, I have used several cameras with touch screen controls and have found the idea more appealing than the experience. In the Capture Phase they are just a nuisance, completely inaccessible when using an eye level viewfinder and a distraction from the capture process when viewing on a monitor, as one's fingers are all over the image preview.
Capture Phase
The Capture Phase of camera operation provides the most critical test of a camera's ergonomic design and the user's ability to operate the device effectively.
The time span is in seconds.
Snapshooters will want to hold the camera steady, frame up the subject with an effective preview system then press the shutter button to capture the image.
Controllers have a more complex task, with three components: Holding, Viewing and Operating.
Holding  The user should be able to hold the device steady with both hands, and operate the camera without changing grip with either hand.  This is discussed in Parts 4, 6 and 7.
Viewing  Please refer to Part 9.  The best ergonomic arrangement is provided by a camera having both a rear monitor and a built in eye level viewfinder. Electronic viewfinders have real and potential ergonomic advantages over optical types.
The tasks of viewing are quite demanding as there is a great deal of visually transmitted information to be assimilated.
The subject must be identified and framed,  zooming as required.  The best moment for capture must be decided.
Camera status data must be noted.   Primary capture data must be easily visible at all times on a status bar below the image preview. This includes ISO, Shutter Speed and  Aperture. Other data preferably located on this status bar includes Shooting Mode, Exposure Compensation, Flash Exposure Compensation, Battery status, Shots remaining on memory card.
Frequently viewed data superimposed on the image preview includes Electronic Level indicator, Camera shake warning, Composition grid lines, Active autofocus area position and size, and Drive Mode.
Attention priority is split between the subject and the camera. The best ergonomic result will be given by a camera which with practice can be driven like a motor vehicle. The controls can be operated by touch while looking at the subject in the viewfinder.
Control  type  Optimum user control is provided by hard interface modules as discussed in Part 10.
Control location  As discussed in Part 8, interface modules must be located in high value areas of the camera.
Operating   The tasks of operating a camera in the Capture phase are as follows (a short sketch of the sequence appears after the list):
Focussing 
* Decide AF or MF
* Set size and position of active AF/MF  area.
* Start/Lock AF/MF
Metering
* Activate metering
* Check and modify  the firing solution (Aperture/Shutter Speed/ISO)
* Check and adjust exposure compensation.
Capture 
* Fully depress the shutter button (or other assigned interface module) to capture the image.
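Laid out in code, this sequence is short and repeatable. A minimal Python sketch with a toy stand-in for the camera; all of the method names are mine, invented for illustration:

    class Camera:
        # Toy stand-in so the sequence of calls below actually runs.
        def set_focus_area(self, size, position): print(f"AF area {size} at {position}")
        def start_focus(self, auto): print("AF started and locked" if auto else "MF confirmed")
        def activate_metering(self): print("metering active")
        def firing_solution(self): return {"aperture": 5.6, "shutter": "1/250", "iso": 200}
        def adjust_exposure_compensation(self, ev): print(f"exposure compensation {ev:+.1f} EV")
        def release_shutter(self): print("image captured")

    def capture_sequence(cam, use_af=True, exp_comp=0.0):
        # Focussing
        cam.set_focus_area(size="medium", position="centre")
        cam.start_focus(auto=use_af)
        # Metering
        cam.activate_metering()
        solution = cam.firing_solution()        # check aperture / shutter speed / ISO
        if exp_comp:
            cam.adjust_exposure_compensation(exp_comp)
        # Capture
        cam.release_shutter()
        return solution

    capture_sequence(Camera(), use_af=True, exp_comp=-0.3)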
Challenges
Camera designers need to listen to the voice of ergonomics, often drowned by the clamour of styling, marketing, engineering and the latest crop of electronic functions. They need to make cameras which provide an excellent user experience not just a passing enthusiasm. 
The critical test
If a camera can pass the following critical triple function test in the Capture Phase of use it is well on the way to a high rating for ergonomic performance.
(Holding):  The user should be able to hold the camera steady with both hands and while doing so, without changing grip with either hand,
(Viewing):  Clearly frame the subject through the viewfinder and see status indicators for primary capture parameters in a panel beneath the image preview, and secondary capture parameters (such as the active AF area) beneath or superimposed on the image preview, and while doing so,
(Operating):  Adjust primary and secondary exposure and focus controls, start and lock autofocus or manual focus, activate and lock exposure, then capture the image.
In my very long experience of using many types, makes and models of camera I have found very few which actually pass this test. The really disappointing thing is that in the era of electronic operating systems it is quite easy for any  manufacturer capable of producing a camera to make one which does pass the test. It is no more expensive or difficult to make a camera with excellent ergonomics than one with poor ergonomics.
Review Phase
Some photographers like to review every shot they make, others are content to load images to an editing programme and view them in that setting.
The camera should cater for all tastes.
Operating requirements are much less time and information pressured than is the case with Capture Phase.
Time span is seconds to minutes.
Attention priority is camera directed.
Control type  is one or more hard interface modules.
Control location  is in a low to medium priority location on the camera.
Tasks 
* Call up captured images singly or in groups.
* Enlarge and scroll around selected image(s).
* Check for focus, composition, exposure, subject attributes.
* Delete, tag or modify images singly or in groups.
Challenges  These are similar to those of the Setup Phase: clarify, simplify, coordinate. Review Phase functions need to respond to users' diverse requirements. To achieve this all review functions should be user selectable.
Example  A new camera model  released in 2011 did not provide any user selectable option to disable quick review on the monitor after each exposure. This markedly slowed shot to shot times on an otherwise responsive camera producing a chorus of well deserved complaints on user forums. A firmware update some months after release failed to rectify the problem.










Operating Systems


CAMERA ERGONOMICS
Part 10   Operating Systems
Author  AndrewS



Photo 1 Control dial Index finger 1
Users and cameras      In ergonomic terms there are two main groups of camera users, snapshooters and controllers. This has led to the development of two main camera types. For snapshooters there are cameras with a simplified user interface and no eye level viewfinder. For controllers there is a range of cameras with a more fully featured user interface including an eye level viewfinder.  Snapshooters can readily use fully featured cameras by selecting one of the fully automatic modes, thereby disabling many of the hard control modules and perhaps electing not to use the eye level viewfinder.  
This discussion of operating systems assumes a camera designed for full user control of all functions.  It has both monitor and eye level viewfinder and a full complement of interface modules,  allowing a high level of communication with and control of the photographic process.
Historical note       For most of the history of photography cameras have been controlled by mechanical connections between the operator and the device. This constraint limited the options available to designers. The emergence of electronic operation has brought greater design freedom but also much more complexity. Paradoxically, the electronic era has made the camera designer's task more difficult. The freedom to make a camera any shape at all, to use any kind of interface technology and to locate control modules anywhere on or off the camera forces the designer to make decisions about all these things.
Communication   Communication is a two way street.  The camera needs to clearly present status information to the user. The user must communicate his or her instructions to the camera quickly. Mechanical rangefinder and SLR cameras of the mid twentieth century era managed this task quite well. Lens aperture, focussed distance and depth of field were directly visible on the lens barrel. Shutter speed was displayed on a top dial, as was film speed.  A camera like this gave the operator a direct readout of and direct control of current settings for primary exposure and focussing variables.  
For the photographer willing to practice the skills required, a camera like this provided a very satisfying user experience and good photographs too.
Electronic cameras in the early part of the twenty first century present the operator with a hundred times as much information. They also demand a hundred times as many responses from the user with decisions about settings and image capture options.
The electronic revolution which some hoped would simplify camera operation has had the opposite effect.


Configuration   In the days of mechanical connections every button or dial always did the same thing.  On an electronic device every user interface module can be programmed to carry out any task of which the device is capable.    Camera designers need to use this capability so a user can programme the camera to operate the way that particular individual wishes. When a camera offers 500 or more options for various combinations of settings affecting every aspect of operation it is essential that the user be able to consign the majority of these to a menu and assign direct access only to those required for immediate use in the capture phase of operation. Each user will have a different idea about which items require direct access and that will change with time and experience.
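In software terms this amounts to nothing more than a user-editable mapping from physical modules to functions, with the factory supplying defaults. A minimal Python sketch; the module and function names are assumptions, not any manufacturer's list:

    ALL_FUNCTIONS = {"ISO", "Exposure compensation", "White balance", "AF area size",
                     "Metering mode", "Flash mode", "Dynamic range", "Main menu"}

    FACTORY_DEFAULTS = {
        "Fn1": "ISO",
        "Fn2": "Exposure compensation",
        "rear button 3": "White balance",
        "jog press": "AF area size",
    }

    def assign(layout, module, function):
        # Any module may take any function the camera offers; only the default changes.
        if function not in ALL_FUNCTIONS:
            raise ValueError(f"unknown function: {function}")
        updated = dict(layout)
        updated[module] = function
        return updated

    user_layout = assign(FACTORY_DEFAULTS, "rear button 3", "Dynamic range")
    print(user_layout["rear button 3"])   # Dynamic range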
My view after using many cameras over a 60 year period is that camera manufacturers in the early part of the twenty first century have been extraordinarily slow to understand the  importance of communication and configuration.
Example   Examples abound but here is one just to make the point.  In a major corporate initiative, a well known camera maker [Nikon] introduced a totally new model with a new lens mount [Nikon 1, CX]. The camera has something which looks like a main mode dial, located where it is easy to see, operate and by the way, bump accidentally. One of the options on that dial is called "Motion Snapshot". This makes a short, high speed video and  takes a still photo. On playback you get a slow motion version of the video, the still photo and  music. The same camera buries the main shooting mode and ISO in a menu. Some people might think that Motion Snapshot mode is a great idea, at least until the novelty wears off.  But then they are stuck with the manufacturer's preset dial/button function allocations which may not suit at all.  
A better approach would have been to add more items to the shooting mode dial  and allow the user to select functions allocated to the dial as well as the various buttons.
Generic user / device interface options   What kinds of generic systems  might allow the camera and user a two way exchange of data and  control ?
Voice activation  The technology exists now (2012) for cameras and users to communicate by voice. The idea may seem attractive but I have not seen any reports of actual use. One potential disadvantage of  this method is neurophysiological. It is usually faster to do something than to utter a voice command for the same thing. The reason for this is that translating a thought into words is a complex task. The thought has to be transferred to the appropriate part of the cerebral cortex, coded into words, transferred to another part of the cortex then onwards to the voice muscles.  Simply doing the thing avoids the neurophysiological data processing required to express the thought in words.
Imagine trying to steer a car by voice command.  "....turn left now car.........no, no, that's too far, come back a bit..........oops..........crash."
Eye control   Some years ago Canon developed a system by which the user could look at the part of the image required to be in focus and by so doing move the active autofocus area. This actually worked (well, it worked for me anyway) but perhaps not for others, because the feature did not last long in Canon's lineup.
Touch screens   The electronic corporations which make camera monitors also make touch screen devices such as smart phones and multifunction tablets. It is no surprise therefore that touch screen controls have found their way into cameras. But a camera works in a  fundamentally different way from a multifunction tablet. The user looks at the subject through the camera and operates the controls preferably without having to look at them. In this respect using a camera is more like driving a car than operating a multifunction tablet.
A touch screen is  inaccessible when an eye level viewfinder is in use. Even when using monitor view, operating controls on a touch screen poses difficulties.  The user has to remove one or the other hand from holding the camera in order to reach the screen which means the camera is no longer being held steady. Then putting fingers over the screen makes it difficult to see the subject.  And, of course you get finger grease all over the screen.
Touch screen operation might be quite feasible for a camera supported on a tripod.
Interface modules (IM)   Voice activation, eye control and touch screens sound promising but in real world use hard physical controls prove to be the most suitable for camera operation. By this I mean discrete things on which hands and fingers are laid.   These include buttons, dials, sliders, levers, lens collars, rings and JOG type devices.
As a group I call these interface modules. My apologies for this bureaucratese sounding terminology but the words do express my meaning.
There are several types of  IM. These include
* Set and See   The set and see module can be a dial, lever, slider, ring or collar. It has clearly visible markings so the user can see the current setting at a glance. Shooting Mode, Drive Mode, ISO, Shutter Speed and Exposure Compensation are typical parameters assigned to a set and see module.  IMs of this type have the virtues of direct readout and direct user control. They are not usually visible with the eye level viewfinder in use.  However some can be readily adjusted by feel while looking through the eye level viewfinder, with the set value duplicated in the viewfinder. So most are best used for adjustments in the Prepare Phase of operation but some can be used effectively in Capture Phase.
* Single function, preset   Many cameras use modules of this type. If I had any say in camera design I would abolish these completely. In my view there is no excuse in the electronic era for modules having a single function determined by the manufacturer.
* Single function, user assignable  These are slowly becoming more popular, but should be universal. More, it should be possible for the user to assign to any module any function from a complete menu of all possible functions. Electronic cameras offer such a plethora of selectable functions it is not remotely possible for the maker to guess which ones any specific individual user will want to have at his or her fingertips.  Furthermore many users will want to change their module function assignments after experience with the camera.
* Mode dependent function, preset  The task performed by one of these modules depends on the camera mode in play. The classic example is a main control dial. In Setup Phase, when working through menus, the control dial can scroll through submenus. If a Quick Menu is used in Prepare Phase, the control dial can navigate from one item to the next. In Shooting mode it will alter aperture in A, shutter speed in S, program shift in P and either shutter speed or aperture in M. In Review Phase  the main dial can scroll through images or perform other functions.
JOG Lever   Another example of mode dependent function could be a JOG type lever. This is a lever which can be pushed up/down/left/right in any direction and pushed inwards as well, for added functional capability. A well designed and positioned JOG lever is a great asset to an electronic camera. In all Phases of use there are requirements for an interface system to move something up/down/left/right. One way to achieve this is to press a button to start a process, then use a four way controller, then press an OK or similar button to confirm the change. The JOG lever can eliminate most of this fiddling by directly controlling position at a touch.
Such a module can be used to navigate around a menu screen in Setup or Prepare Phase.  It can provide size change (with one or two inward pushes) or lateral movement of AF position in Capture Phase, enlarge then explore an image in Review Phase, or start recording in Movie Mode. A small sketch of this mode dependent behaviour appears after the list below.
* Mode dependent function, user assignable     The utility of mode dependent modules is increased if user assigned options are available.  For instance in P mode the dial could activate program shift or exposure compensation.
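To show how a single mode dependent module can carry several duties without ambiguity, here is a small Python sketch of a JOG lever whose pushes are interpreted according to the current Phase of use. The mapping is an illustrative assumption, a plausible default rather than a prescription:

    def jog_action(phase, push):
        # Sketch only: one JOG lever, read differently in each Phase of use.
        directional = {
            "Setup": "move menu cursor",
            "Prepare": "move quick menu cursor",
            "Capture": "move active AF area",
            "Review": "scroll enlarged image",
        }
        inward = {
            "Setup": "confirm menu item",
            "Prepare": "select quick menu item",
            "Capture": "cycle AF area size",
            "Review": "toggle enlargement",
        }
        if phase not in directional:
            return "no action"
        return inward[phase] if push == "in" else f"{directional[phase]} {push}"

    print(jog_action("Capture", "left"))   # move active AF area left
    print(jog_action("Review", "in"))      # toggle enlargement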
Juggling   This is something buskers do in the town square. Unfortunately many cameras require their users to do the same thing when attempting to operate the device. Too many cameras require the user to drop the unit down from the eye to make an adjustment. Too many demand that the user juggle the camera between right and left hand, changing grip with each in order to make adjustments to primary and secondary capture parameters.  In the electronic era there is simply no excuse for this.
Photo 2 Control Dial Index Finger 2
Unlabelled control dials: how many and where?   Some cameras have no unlabelled mode dependent control dials, some have three or more. Some cameras place one or more dials on top of the camera, some on the back. Some place these dials so they are operated by the right index finger, some by the thumb, some by both. There is obviously no general agreement about this matter at all. Contrast this with motor vehicles. Climb into almost any vehicle and you will find the major steer, go and stop interface modules are the same type in the same place.
Is it possible to identify an optimum arrangement of  camera interface modules, based on ergonomic principles, separate from any consideration of style, fashion, custom, or preference ? I believe the answer to that question is yes.
In Part 6 of this series I discussed hands and fingers. The ergonomic studies behind this discussion showed that in the case of a standard, generic, hand held camera of modern design the only body part not involved in holding and/or supporting the camera is the right index finger. It therefore follows that the right index finger is the best human asset for operating a shutter button and a main control dial.   
I have read opinions from some camera reviewers giving preference to a thumb operated control dial. The basis for this is usually a view that the index finger should be free to operate the shutter button. However my time and motion studies of camera operation show the index finger never has to operate a control dial and the shutter button at the same time. You never want to change, say, the lens aperture and depress the shutter button simultaneously.  A further problem is that in order to operate a control dial with the right thumb it is necessary to partly or completely release the right hand, thus destabilising the camera. A recent (February 2012) camera release [Sony NEX7] has no main mode dial, no control dial operated by the right index finger and no JOG lever. But it does have three unlabelled control dials each operated by the right thumb.  This system generates a lot of complaint on user forums about inadvertent activation of those rear dials. My concern about this system of user interface modules is more fundamental. My time and motion analysis of camera operation would suggest that the designers of this "three dial" approach have failed to grasp the fundamentals of ergonomics at a very basic hands and fingers level.
It is a general principle of functional design that a human machine interface will provide optimal control when it is provided with just enough interface modules (controls) to get the job done but no more.  My research with mockups, using the principle of  "form follows fingers" leads me to the view that the following basic arrangement of control modules will provide an effective human machine interface with minimal clutter and a very low rate of inadvertent module activation.
1. A "set and see"  Main Mode Dial with maker and user defined functions, and  a "set and see" Drive Mode dial or lever with maker and user defined functions. These two dials, located on the camera top,  provide instant visual feedback on major operating parameters which require adjustment in the Prepare Phase.
2. User configurable buttons with default functions for other parameters requiring adjustment in the Prepare Phase.  These would include AF Mode, Metering Mode,  Macro Setting,  White Balance, Quick Menu, Display options, Flash options,  and others by user allocation.
3. A JOG type lever immediately accessible to the right thumb without shifting grip with the right hand.  This needs  a comprehensive set of maker and user defined functions. This provides direct access to a range of functions in the Capture Phase and also Prepare and Review Phases.
4. One unlabelled main control dial, operated by the right index finger and located immediately in front of or behind the shutter button,  provided that the dial is positioned and angled to match the natural movement of the index finger.
5. A cluster of four modules operated by the right index finger. These are Shutter Button, Main Control Dial, ISO and Exposure Compensation, with some user configuration.
In Part 12 of this series I will show how this works on mockups.
Photographs 
Photo 1, Control Dial Index Finger 1   This camera has generally decent  ergonomics but the top area shown here could be improved. The main control dial is operated by the right index finger which is desirable. However the distance between the centre of the shutter button and the dial is 16 mm which is more than necessary for clearance between the two. The dial is parallel to the camera back which might look tidy but is suboptimal ergonomic practice as the finger which operates the dial falls across the upper part of the camera at an angle. Now see the ISO button behind the control dial. This is a further 12 mm back from the dial making a total distance of 28 mm from the shutter button to the ISO button. This is at or beyond the limit of side to side movement of the metacarpophalangeal joint for many people. Therefore those people will have to shift grip with the right hand to access the ISO button.  In addition the ISO button is one of four in a row, each the same size, with ISO only identified by a tiny little braille like nipple on top.
The rule with ergonomics should be "form follows fingers". This camera gets half way there but could easily have been much better.
Photo 2, Control Dial Index Finger 2  This is a substantially more compact camera but with the same basic relationship between the shutter button and the main control dial. The distance between them is 12 mm.  This is only 4 mm less than the distance in Photo 1, but it makes shifting from one to the other significantly easier. The index finger in the photo is lifted up so you can see both control modules. Again the control dial is neatly lined up parallel to the monitor even though it would better fit the finger which operates it if it were to be angled about 25 degrees.
Photo 3 Control Dial Thumb
Photo 3, Control Dial Thumb   This demonstrates what can happen when several ergonomic errors coexist. The main control dial is thumb actuated and the dial itself is almost fully submerged in the surrounding material. In consequence the only way to reliably operate the dial is to hold the thumb as shown here so the very tip of the digit, just below the fingernail, is the body part bearing on the dial. This forces the base of the thumb away from the camera and breaks the opposition posture required to get a proper grip on the device. In addition a forceful push with the thumb is required to operate the dial so the index finger has to be removed from the shutter button and repositioned in front of the camera to resist the thumb pushing from the back.  This camera is the same size as the one in Photo 2, indicating the problems are due to poor ergonomic design and are not simply due to the small size of the camera. In this photo only the left hand and the middle finger of the right hand are preventing the camera from falling on the floor.
Photo 4 Back Button AF Start Good
Photo 4, Back Button AF Start Good    Back button AF start is desirable especially when following action as it separates AF activation from metering and capture.   This is the same camera as in Photo 1. The AF start button is obscured by the thumb. To activate AF the thumb has only to flex a few millimeters at the interphalangeal joint. Full grip on the camera is retained.
Photo 5, Back Button AF Start Bad   On this camera the AEL/AFL button can be configured to activate AF start/lock, which is desirable. However the button is incorrectly positioned.  The button is set forward of the plane of the monitor which prevents the unflexed thumb from reaching it at all.  In order to bring the thumb to bear on the button it has to be flexed at the interphalangeal and metacarpophalangeal joints. You can see in the photo this forces the base of the thumb and the palm of the right hand away from the camera, completely disrupting the grip.
Photo 5 Back Button AF Start Bad
This same camera has the main control dial located on the handle in front of and below the shutter button where it is covered by the middle finger as it grips the handle. The only way to access the control dial with the right index finger is by completely shifting grip with the right hand.     Note that this camera does have a well designed and located Main Mode Dial and Drive Mode  lever.
Photo 6 ISO Button Placement 
 
Photo 6, ISO Button Placement   This is a 2012 professional camera release by a manufacturer with 76 years' experience of making cameras. You can see the ISO button is on the top left of the camera. To change ISO the user has to drop the camera down from the eye, release the left hand from the lens, locate the ISO button by looking at it (the design makes it almost impossible for ordinary mortals to find it by touch), push the button with a finger of the left hand, return the left hand to the lens, then scroll to the required ISO with the right thumb or index finger on the front or rear control dial. It would have been so easy to configure the red dot button behind the shutter button to activate ISO. Then the operator could control all the primary and secondary exposure parameters with the right index finger while looking through the viewfinder.