Camera Objects

Camera Objects can be created within the ProblemModuleInit routine in problem.f90. To create Cameras, you first need to add two USE statements to your problem.f90:

  USE Cameras
  USE Fields

Then, in ProblemModuleInit, declare a pointer variable of type CameraDef:

  TYPE(CameraDef), POINTER :: Camera

Then create the Camera and set the various parameters, as in the following example:

    CALL CreateCamera(Camera)
    Camera%pos = (/4d0,-10d0,4d0/)
    Camera%UpVector = (/0d0,0d0,1d0/)
    Camera%Focus = (/4d0,4d0,4d0/)
    Camera%FOV = (/30d0,3d0/)

Here is a full list of the various Camera parameters with their default values:

  REAL(KIND=qPREC), DIMENSION(3) :: pos = DEFAULTCAMERAPOS ! Will choose a point at a distance in -y to see the entire domain
  REAL(KIND=qPREC), DIMENSION(3) :: UpVector = (/0d0,0d0,1d0/)
  REAL(KIND=qPREC), DIMENSION(3) :: Focus = CENTER ! Will select the center of the simulation domain
  REAL(KIND=qPREC), DIMENSION(2) :: FOV = (/30d0, 30d0/)

  • Given the camera position and the focus, we can construct the camera normal vector by normalizing the vector from the camera position to the focus.
  • We can then construct the true up vector by removing the component of the up vector along the camera vector (so the two are orthogonal) and then normalizing.
  • We can then get the horizontal vector by crossing the camera vector with the up vector.
  • Any given spatial point can then be mapped to the image plane by projecting the vector from the camera to that point onto the horizontal and up vectors (see the sketch after this list).
  • The volume of a given cell may lie entirely within one pixel, in which case the contribution to that pixel is just the amount of material in the cell. But when the cell gets large enough that its volume maps to more than one pixel, things get complicated. At that point it becomes better to sample which pixels pass through the cell instead and calculate the optical depth through the cell. A given cell has a projected angular size of ~ dx/d, where d is its distance from the camera, while each pixel subtends an angle of approximately FOV/res. So when dx/d > FOV/res, the method should switch. If we sub-sample coarse grids to the same resolution as the finest grids, and we have a minimum camera distance d', then we can choose a resolution for the projection of ~ FOV d'/dx. Alternatively, we can choose the sub-sampling rate so that each sub-sampled volume element maps to roughly one pixel (ddx/d ≲ FOV/res).
  • To avoid information loss, the resolution of the image should be at least ~ FOV d'/dx, so that a pixel subtends no more than the angular size of the finest cells at the closest camera distance.
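
Below is a minimal sketch of how the camera basis and the point-to-image projection described above could be written. It is an illustration only, not the actual implementation: the binning routines below call this projection as GetPos/GetPixel, the constants half and PI, the qPREC kind, and CameraDef are assumed to come from the same modules as in those routines, and treating FOV as degrees is also an assumption.

   ! Sketch only: build the camera basis described above and map a point to
   ! fractional image coordinates in [0,1]. The real GetPos/GetPixel may use a
   ! different convention; FOV is assumed to be in degrees here.
   FUNCTION GetPixel(Camera, xpos) RESULT(pixel)
      TYPE(CameraDef), POINTER :: Camera
      REAL(KIND=qPREC) :: xpos(3), pixel(2)
      REAL(KIND=qPREC) :: n(3), up(3), horiz(3), r(3)
      ! Camera normal vector: from the camera position toward the focus
      n = Camera%Focus - Camera%pos
      n = n/sqrt(sum(n**2))
      ! True up vector: remove the component along n, then normalize
      up = Camera%UpVector - sum(Camera%UpVector*n)*n
      up = up/sqrt(sum(up**2))
      ! Horizontal vector: cross product of the camera vector and the up vector
      horiz = (/ n(2)*up(3)-n(3)*up(2), n(3)*up(1)-n(1)*up(3), n(1)*up(2)-n(2)*up(1) /)
      ! Project the vector from the camera to the point onto horiz and up, and
      ! convert the angular offsets to fractional image coordinates using the FOV
      r = xpos - Camera%pos
      pixel(1) = half + atan2(sum(r*horiz), sum(r*n))/(Camera%FOV(1)*PI/180d0)
      pixel(2) = half + atan2(sum(r*up),    sum(r*n))/(Camera%FOV(2)*PI/180d0)
   END FUNCTION GetPixel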

Clearly there are still some aliasing effects. The routine that bins a cell into an image pixel is given below:

   SUBROUTINE BinCell(Camera, pos, dx, data, rho)
      TYPE(CameraDef), POINTER :: Camera
      REAL(KIND=qPREC) :: pos(3), xpos(3)
      REAL(KIND=qPREC) :: dx, ddx, xlower(3)
      REAL(KIND=qPREC), DIMENSION(:,:) :: data
      REAL(KIND=qPREC) :: rho
      INTEGER :: ipos(2), sample_res, i, j, k
      xlower=pos-half*dx
      ! Sub-sampling rate based on the pixel scale (FOV/res) and the cell's
      ! angular size (dx / distance to the camera)
      sample_res=2*ceiling(camera%FOV(1)/camera%res / (dx/sqrt(sum((pos-Camera%pos)**2))))
      ddx=dx/REAL(sample_res, KIND=qPREC)
      xlower=xlower-half*ddx
      ! Deposit each sub-sample's mass into the image pixel it maps to
      DO i=1,sample_res
         xpos(1)=xlower(1)+ddx*i
         DO j=1,sample_res
            xpos(2)=xlower(2)+ddx*j
            DO k=1,sample_res
               xpos(3)=xlower(3)+ddx*k
               ipos(1:2)=nint(GetPos(Camera, xpos)*shape(data))
               IF (ALL(ipos(1:2) >= 1) .AND. ALL(ipos(1:2) <= shape(data))) THEN
                  data(ipos(1),ipos(2))=data(ipos(1),ipos(2))+rho*ddx**3
               END IF
            END DO
         END DO
      END DO
   END SUBROUTINE BinCell
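
To use the routine, one would loop over the cells of each grid and bin them in turn. The block layout below (nx, ny, nz, xlo, q, image, cellpos) is a hypothetical example for illustration, not code from the module:

   ! Hypothetical usage sketch: bin every cell of a uniform block into the image.
   ! nx, ny, nz, xlo, q(:,:,:), image(:,:) and cellpos are illustrative names only.
   DO i=1,nx
      DO j=1,ny
         DO k=1,nz
            cellpos = xlo + dx*(REAL((/i,j,k/), KIND=qPREC)-half)   ! cell center
            CALL BinCell(Camera, cellpos, dx, image, q(i,j,k))      ! q holds the cell density
         END DO
      END DO
   END DO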

We could in principle continue to raise sample_res, but that gets computationally expensive. A better approach would be to sample along each ray at some interval; then the sampling is pixel based instead of volume based, which should reduce the aliasing. Or we could calculate the optical depth across each cell for each ray that intersects the cell. Given that our mesh is Cartesian, this is not as hard as it might sound:

  • Find which rays could intersect the cell.
  • For each ray, find which two faces the ray intersects and the corresponding points of entry and exit.
  • The contribution to the pixel will be the value of the cell times the distance between those points.
   SUBROUTINE BinCell(Camera, pos, dx, data, rho)
      TYPE(CameraDef), POINTER :: Camera
      REAL(KIND=qPREC) :: pos(3), xpos(3)
      REAL(KIND=qPREC) :: dx, xbounds(3,2), my_pixel(2), pixel(2)
      REAL(KIND=qPREC), DIMENSION(:,:) :: data
      REAL(KIND=qPREC) :: rho, a, intersection(6,3), ray(3), max_distance
      INTEGER :: i, j, k, npoints, min_pixels(2), max_pixels(2), dim, odim(2), edge
      xbounds(:,1)=pos-half*dx
      xbounds(:,2)=pos+half*dx
      ! Find the range of image pixels that could see this cell by projecting
      ! the eight cell corners
      min_pixels=huge(min_pixels(1))
      max_pixels=0
      DO i=1,2
         DO j=1,2
            DO k=1,2
               my_pixel=GetPixel(Camera, (/xbounds(1,i), xbounds(2,j), xbounds(3,k)/))*shape(data)+half
               min_pixels=max(1,min(min_pixels, floor(my_pixel)))
               max_pixels=min(shape(data),max(max_pixels, ceiling(my_pixel)))
            END DO
         END DO
      END DO
      ! For each candidate pixel, cast a ray through the pixel center and find
      ! where it enters and exits the cell
      DO i=min_pixels(1), max_pixels(1)
         DO j=min_pixels(2), max_pixels(2)
            pixel=(/i,j/)-half
            Ray=GetRay(Camera, pixel/REAL(shape(data), KIND=qPREC))
            npoints=0
            DO dim=1,3
               DO edge=1,2
                  IF (ray(dim) == 0d0) CYCLE   ! ray parallel to this pair of faces
                  ! Solve Camera%pos(dim)+a*ray(dim)=xbounds(dim,edge) for a
                  a=(xbounds(dim,edge)-Camera%pos(dim))/ray(dim)
                  xpos=Camera%pos+a*ray
                  odim=modulo((/dim,dim+1/),3)+1   ! the other two dimensions
                  IF (ALL(xpos(odim) >= xbounds(odim,1) .AND. xpos(odim) <= xbounds(odim,2))) THEN
                     npoints=npoints+1
                     intersection(npoints,:)=xpos
                  END IF
               END DO
            END DO
            IF (npoints == 0) CYCLE
            ! Path length through the cell = distance between entry and exit points
            max_distance=0d0
            DO k=1,npoints
               max_distance=max(max_distance, sqrt(sum((intersection(k,:)-intersection(1,:))**2)))
            END DO
            data(i,j)=data(i,j)+rho*max_distance
         END DO
      END DO
   END SUBROUTINE BinCell
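
The entry/exit search above loops over all six cell faces. The same path length can also be written as the standard "slab" interval test; the sketch below is only an alternative formulation for illustration (RayBoxPathLength is not a routine from the source, and qPREC is assumed to come from the usual modules). Like the loop above, it treats the ray as an infinite line.

   ! Sketch only: path length of a ray through an axis-aligned cell using the
   ! standard slab method. origin/dir describe the ray, xlo/xhi the cell bounds;
   ! returns 0 if the ray misses the cell.
   FUNCTION RayBoxPathLength(origin, dir, xlo, xhi) RESULT(length)
      REAL(KIND=qPREC), INTENT(IN) :: origin(3), dir(3), xlo(3), xhi(3)
      REAL(KIND=qPREC) :: length, t1, t2, tmin, tmax
      INTEGER :: dim
      tmin=-huge(tmin)
      tmax= huge(tmax)
      DO dim=1,3
         IF (dir(dim) == 0d0) THEN
            ! Ray parallel to these faces: it misses unless it lies between them
            IF (origin(dim) < xlo(dim) .OR. origin(dim) > xhi(dim)) THEN
               length=0d0
               RETURN
            END IF
         ELSE
            t1=(xlo(dim)-origin(dim))/dir(dim)
            t2=(xhi(dim)-origin(dim))/dir(dim)
            tmin=max(tmin, min(t1,t2))   ! latest entry over all slabs
            tmax=min(tmax, max(t1,t2))   ! earliest exit over all slabs
         END IF
      END DO
      ! The line crosses the cell only if the entry comes before the exit
      length=max(0d0, tmax-tmin)*sqrt(sum(dir**2))
   END FUNCTION RayBoxPathLength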

An Example Problem
