The Soc-Camera Drivers
======================

Author: Guennadi Liakhovetski <g.liakhovetski@gmx.de>

Terminology
-----------

The following terms are used in this document:

- camera / camera device / camera sensor - a video-camera sensor chip, capable
  of connecting to a variety of systems and interfaces; it typically uses i2c
  for control and configuration, and a parallel or a serial bus for data.

- camera host - an interface to which a camera is connected. Typically a
  specialised interface, present on many SoCs, e.g. PXA27x and PXA3xx, SuperH,
  AVR32, i.MX27, i.MX31.

- camera host bus - a connection between a camera host and a camera. It can be
  parallel or serial and consists of data and control lines, e.g. clock and
  vertical and horizontal synchronization signals.

Purpose of the soc-camera subsystem
-----------------------------------

The soc-camera subsystem initially provided a unified API between camera host
drivers and camera sensor drivers. Later the soc-camera sensor API was replaced
with the standard V4L2 subdev API, which also made camera driver re-use with
non-soc-camera hosts possible. The camera host API to the soc-camera core has
been preserved.

Soc-camera implements a V4L2 interface to the user; currently only the "mmap"
method is supported by host drivers. However, the soc-camera core also provides
support for the "read" method.

The subsystem has been designed to support multiple camera host interfaces and
multiple cameras per interface, although most applications have only one camera
sensor.

Existing drivers
----------------

As of 3.7 there are seven host drivers in the mainline: atmel-isi.c,
mx1_camera.c (broken, scheduled for removal), mx2_camera.c, mx3_camera.c,
omap1_camera.c, pxa_camera.c, sh_mobile_ceu_camera.c, and multiple sensor
drivers under drivers/media/i2c/soc_camera/.

Camera host API
---------------

A host camera driver is registered using the

.. code-block:: none

    soc_camera_host_register(struct soc_camera_host *);

function. The host object can be initialized as follows:

.. code-block:: none

    struct soc_camera_host *ici;
    ici->drv_name = DRV_NAME;
    ici->ops = &camera_host_ops;
    ici->priv = pcdev;
    ici->v4l2_dev.dev = &pdev->dev;
    ici->nr = pdev->id;
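
Putting the two steps together, a minimal sketch of a host driver's platform
probe function could look as follows; struct my_camera_dev and its embedded
soc_host member are hypothetical driver context, not part of the soc-camera
API:

.. code-block:: none

    struct my_camera_dev {
            struct soc_camera_host soc_host;
            /* further driver-private state */
    };

    static int my_camera_probe(struct platform_device *pdev)
    {
            struct my_camera_dev *pcdev;

            pcdev = devm_kzalloc(&pdev->dev, sizeof(*pcdev), GFP_KERNEL);
            if (!pcdev)
                    return -ENOMEM;

            pcdev->soc_host.drv_name = DRV_NAME;
            pcdev->soc_host.ops = &camera_host_ops;
            pcdev->soc_host.priv = pcdev;
            pcdev->soc_host.v4l2_dev.dev = &pdev->dev;
            pcdev->soc_host.nr = pdev->id;

            /* Register the host with the soc-camera core */
            return soc_camera_host_register(&pcdev->soc_host);
    }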

All camera host methods are passed in a struct soc_camera_host_ops:

.. code-block:: none

    static struct soc_camera_host_ops camera_host_ops = {
            .owner = THIS_MODULE,
            .add = camera_add_device,
            .remove = camera_remove_device,
            .set_fmt = camera_set_fmt_cap,
            .try_fmt = camera_try_fmt_cap,
            .init_videobuf2 = camera_init_videobuf2,
            .poll = camera_poll,
            .querycap = camera_querycap,
            .set_bus_param = camera_set_bus_param,
            /* The rest of host operations are optional */
    };

The .add and .remove methods are called when a sensor is attached to or
detached from the host. .set_bus_param is used to configure physical connection
parameters between the host and the sensor. .init_videobuf2 is called by the
soc-camera core when a video device is opened; the host driver would typically
call vb2_queue_init() in this method, as in the sketch below. Further
video-buffer management is implemented completely by the specific camera host
driver. If the host driver supports non-standard pixel format conversion, it
should implement .get_formats and, possibly, .put_formats operations. See below
for more details about format conversion. The rest of the methods are called
from the respective V4L2 operations.
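
A minimal .init_videobuf2 implementation could look like this sketch; the
vb2_ops structure my_vb2_ops, struct my_buffer and the choice of the dma-contig
allocator are assumptions standing in for driver-specific choices:

.. code-block:: none

    static int camera_init_videobuf2(struct vb2_queue *q,
                                     struct soc_camera_device *icd)
    {
            q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            q->io_modes = VB2_MMAP;
            q->drv_priv = icd;
            q->ops = &my_vb2_ops;              /* driver's vb2 callbacks */
            q->mem_ops = &vb2_dma_contig_memops;
            q->buf_struct_size = sizeof(struct my_buffer);

            return vb2_queue_init(q);
    }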

Camera API
----------

Sensor drivers can use struct soc_camera_link, typically provided by the
platform, to specify to which camera host bus the sensor is connected and,
optionally, to provide platform .power and .reset methods for the camera. This
struct is passed to the camera driver via the I2C client device platform data
and can be obtained using the soc_camera_i2c_to_link() macro. Care should be
taken when using soc_camera_vdev_to_subdev() and when accessing struct
soc_camera_device using v4l2_get_subdev_hostdata(): both only work when running
on an soc-camera host. The actual camera driver operation is implemented using
the V4L2 subdev API. Additionally, soc-camera camera drivers can use auxiliary
soc-camera helper functions like soc_camera_power_on() and
soc_camera_power_off(), which switch regulators provided by the platform and
call board-specific power switching methods. soc_camera_apply_board_flags()
takes camera bus configuration capability flags and applies any board
transformations, e.g. signal polarity inversion. soc_mbus_get_fmtdesc() can be
used to obtain a pixel format descriptor corresponding to a certain media-bus
pixel format code. soc_camera_limit_side() can be used to restrict the
beginning and length of a frame side, based on camera capabilities.
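
For example, a sensor driver's .s_power() subdev operation might use the power
helpers roughly as in the sketch below; mt9xxx_s_power is a hypothetical driver
function, and the soc_camera_link based helper signatures shown are those of
the 3.x-era API:

.. code-block:: none

    static int mt9xxx_s_power(struct v4l2_subdev *sd, int on)
    {
            struct i2c_client *client = v4l2_get_subdevdata(sd);
            struct soc_camera_link *icl = soc_camera_i2c_to_link(client);

            if (on)
                    /* enable regulators, then call the board .power hook */
                    return soc_camera_power_on(&client->dev, icl);

            return soc_camera_power_off(&client->dev, icl);
    }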

VIDIOC_S_CROP and VIDIOC_S_FMT behaviour
----------------------------------------

The above user ioctls modify the image geometry as follows:

VIDIOC_S_CROP: sets the location and size of the sensor window. The unit is one
sensor pixel. Changing the sensor window size preserves any scaling factors,
therefore the user window size changes as well.

VIDIOC_S_FMT: sets the user window. It should preserve a previously set sensor
window as much as possible by modifying the scaling factors. If the sensor
window cannot be preserved precisely, it may be changed too.

In soc-camera there are two locations where scaling and cropping can take
place: in the camera driver and in the host driver. User ioctls are first
passed to the host driver, which then generally passes them down to the camera
driver. It is more efficient to perform scaling and cropping in the camera
driver to save camera bus bandwidth and maximise the framerate. However, if the
camera driver fails to set the required parameters with sufficient precision,
the host driver may decide to also use its own scaling and cropping to fulfill
the user's request.

Camera drivers are interfaced to the soc-camera core and to host drivers over
the v4l2-subdev API, which is purely functional: it doesn't pass any data.
Therefore all camera drivers shall reply to .g_fmt() requests with their
current output geometry. This is necessary to correctly configure the camera
bus. .s_fmt() and .try_fmt() have to be implemented too. Sensor window and
scaling factors have to be maintained by camera drivers internally. According
to the V4L2 API all capture drivers must support the VIDIOC_CROPCAP ioctl,
hence we rely on camera drivers implementing .cropcap(). If the camera driver
does not support cropping, it may choose not to implement .s_crop(), but at
least the .g_crop method must be implemented to enable cropping support by the
camera host driver.
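
A minimal .cropcap() implementation in a sensor driver, reporting a fixed pixel
array, might look like this sketch; the MT9XXX_* limits are hypothetical values
standing in for the sensor's real geometry:

.. code-block:: none

    #define MT9XXX_COLUMN_START  16
    #define MT9XXX_ROW_START     12
    #define MT9XXX_MAX_WIDTH     2048
    #define MT9XXX_MAX_HEIGHT    1536

    static int mt9xxx_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *a)
    {
            a->bounds.left = MT9XXX_COLUMN_START;
            a->bounds.top = MT9XXX_ROW_START;
            a->bounds.width = MT9XXX_MAX_WIDTH;
            a->bounds.height = MT9XXX_MAX_HEIGHT;
            a->defrect = a->bounds;
            a->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            a->pixelaspect.numerator = 1;
            a->pixelaspect.denominator = 1;

            return 0;
    }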

User window geometry is kept in the .user_width and .user_height fields of
struct soc_camera_device and is used by the soc-camera core and host drivers.
The core updates these fields upon successful completion of a .s_fmt() call,
but if these fields change elsewhere, e.g. during .s_crop() processing, the
host driver is responsible for updating them.
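
A hedged sketch of that bookkeeping in a host driver's .set_crop handler is
shown below; it assumes the 3.x-era video .s_crop and .g_mbus_fmt subdev
operations and omits any host-side scaling the driver might additionally apply:

.. code-block:: none

    static int camera_set_crop(struct soc_camera_device *icd,
                               const struct v4l2_crop *a)
    {
            struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
            struct v4l2_mbus_framefmt mf;
            int ret;

            /* Let the sensor crop as precisely as it can */
            ret = v4l2_subdev_call(sd, video, s_crop, a);
            if (ret < 0)
                    return ret;

            /* Query the sensor's resulting output geometry */
            ret = v4l2_subdev_call(sd, video, g_mbus_fmt, &mf);
            if (ret < 0)
                    return ret;

            /*
             * The core only updates .user_width and .user_height after a
             * successful .s_fmt(); if they change here, the host driver
             * must refresh them itself.
             */
            icd->user_width = mf.width;
            icd->user_height = mf.height;

            return 0;
    }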

Format conversion
-----------------

V4L2 distinguishes between pixel formats as they are stored in memory and as
they are transferred over a media bus. Soc-camera provides support to
conveniently manage these formats. A table of standard transformations is
maintained by the soc-camera core, describing what FOURCC pixel format will be
obtained if a media-bus pixel format is stored in memory according to certain
rules. E.g. if MEDIA_BUS_FMT_YUYV8_2X8 data is sampled with 8 bits per sample
and stored in memory in little-endian order with no gaps between bytes, the
data in memory will represent the V4L2_PIX_FMT_YUYV FOURCC format. These
standard transformations are used by soc-camera or by camera host drivers to
configure camera drivers to produce the FOURCC format requested by the user
with the VIDIOC_S_FMT ioctl(). Apart from those standard format conversions,
host drivers can also provide their own conversion rules by implementing
.get_formats and, if required, .put_formats methods.
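
As an illustration of the standard path, a host driver's .get_formats might,
for each media-bus code the sensor reports, look up the standard descriptor and
expose only the 1:1 translation. The sketch below assumes the since-removed
.enum_mbus_fmt video subdev operation and media-bus codes as plain u32 values;
exact types and operations vary across kernel versions:

.. code-block:: none

    static int camera_get_formats(struct soc_camera_device *icd,
                                  unsigned int idx,
                                  struct soc_camera_format_xlate *xlate)
    {
            struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
            const struct soc_mbus_pixelfmt *fmt;
            u32 code;
            int ret;

            ret = v4l2_subdev_call(sd, video, enum_mbus_fmt, idx, &code);
            if (ret < 0)
                    /* No more media-bus codes supported by the sensor */
                    return 0;

            /* Look up the standard memory layout for this media-bus code */
            fmt = soc_mbus_get_fmtdesc(code);
            if (!fmt)
                    return 0;

            if (xlate) {
                    xlate->host_fmt = fmt;
                    xlate->code = code;
            }

            /* One FOURCC format supported for this media-bus code */
            return 1;
    }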