

Create 3D white point cloud video on Mac using 2 Kinects v1


I've been trying to get a 3D rendition of a body for more than a month now and I'm stuck. I've been using Processing (specifically, a modified version of Daniel Shiffman's point cloud example) to create a point cloud from one Kinect, but I can't figure out how to combine the clouds from two Kinects on my Mac. Can I record the two point clouds separately and then combine them later in other software? Also, any time I export the point clouds, the background solidifies to black. Here is the full code I'm using:



    import org.openkinect.freenect.*;
    import org.openkinect.processing.*;
    import codeanticode.syphon.*;

    // Kinect library object
    Kinect kinect;
    SyphonServer server;

    // Angle for rotation
    float r = radians(20);

    // Lookup table so we don't repeat the depth conversion math every frame
    float[] depthLookUp = new float[2048];

    void setup() {
      // Rendering in P3D
      size(1280, 720, P3D);
      kinect = new Kinect(this);
      kinect.initDepth();
      server = new SyphonServer(this, "Processing Syphon");

      // Lookup table for all possible raw depth values (0 - 2047)
      for (int i = 0; i < depthLookUp.length; i++) {
        depthLookUp[i] = rawDepthToMeters(i);
      }
    }

    void draw() {
      background(0, 0, 0, .1);  // note: the alpha argument is ignored in P3D
      // Get the raw depth as an array of integers
      int[] depth = kinect.getRawDepth();

      // Only calculate and draw every 2nd pixel (equivalent of 320x240)
      int skip = 2;

      // Translate and rotate
      translate(width/2, height/2);
      rotateY(r);

      // For each sampled (x, y) pixel, compute its offset into the flat
      // depth array and map the point into 3D space
      for (int x = 0; x < kinect.width; x += skip) {
        for (int y = 0; y < kinect.height; y += skip) {
          int offset = x + y*kinect.width;

          // Convert Kinect data to a world xyz coordinate
          int rawDepth = depth[offset];
          PVector v = depthToWorld(x, y, rawDepth);

          stroke(255, 255, 255);
          pushMatrix();
          // Scale up by 500
          float factor = 500;
          translate(v.x*factor, v.y*factor, factor - v.z*factor);
          // Draw a point
          point(0, 0);
          popMatrix();
        }
      }

      // Rotation increment (currently static)
      r += 0.00f;
      server.sendScreen();

      // saveFrame("output1/dancer_####.png");
    }

    // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
    float rawDepthToMeters(int depthValue) {
      if (depthValue < 947) {
        return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
      }
      return 0.0f;
    }

    // Only needed to make sense of the output depth values from the Kinect
    PVector depthToWorld(int x, int y, int depthValue) {
      final double fx_d = 1.0 / 5.9421434211923247e+02;
      final double fy_d = 1.0 / 5.9104053696870778e+02;
      final double cx_d = 3.3930780975300314e+02;
      final double cy_d = 2.4273913761751615e+02;

      // Build the result vector to give each point its 3D position
      PVector result = new PVector();
      double depth = depthLookUp[depthValue];
      result.x = (float)((x - cx_d) * depth * fx_d);
      result.y = (float)((y - cy_d) * depth * fy_d);
      result.z = (float)depth;
      return result;
    }
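On the "record two clouds and combine them later" idea: one common route is to dump each frame's points to an ASCII PLY file, which tools like CloudCompare or MeshLab can load, align, and merge. This is not part of the sketch above, just a minimal plain-Java sketch of such a writer; the class name `PlyWriter` and the file naming are hypothetical:

```java
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Minimal ASCII-PLY writer: one file per frame, each point as "x y z".
// The resulting files can be opened in CloudCompare/MeshLab and the two
// Kinect recordings aligned and merged there.
public class PlyWriter {
    public static void writePly(String path, List<float[]> points) throws Exception {
        PrintWriter out = new PrintWriter(path);
        out.println("ply");
        out.println("format ascii 1.0");
        out.println("element vertex " + points.size());
        out.println("property float x");
        out.println("property float y");
        out.println("property float z");
        out.println("end_header");
        for (float[] p : points) {
            out.println(p[0] + " " + p[1] + " " + p[2]);
        }
        out.close();
    }

    public static void main(String[] args) throws Exception {
        // Toy frame with two points (in the sketch, these would be the
        // PVectors returned by depthToWorld for one frame)
        List<float[]> pts = new ArrayList<>();
        pts.add(new float[]{0.1f, 0.2f, 1.5f});
        pts.add(new float[]{-0.3f, 0.0f, 2.0f});
        writePly("frame_0001.ply", pts);
    }
}
```

In a Processing sketch the same logic could run at the end of `draw()`, collecting each frame's `PVector`s into a list and writing one numbered file per frame.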









      java macos processing kinect point-cloud-library






      asked Mar 7 at 6:34 by Ishi
      edited Mar 7 at 6:43 by Mr. Semicolon





















