Study 2 data processing, analysis, figures #20

Open
4 tasks done
mayacakmak opened this issue Oct 29, 2020 · 6 comments
Comments

@mayacakmak
Owner

mayacakmak commented Oct 29, 2020

@kaviMD Creating a separate thread for study 2 related data stuff. The first MTurk batch is now running, so soon we should have some data to look at.

I know you've updated the data model a bit on Firebase. Is the data processing code under se3/ also updated? What needs to be done there?

Data we'll want to extract from the logs:

  • Task completion time (we might actually want to look at these per task since the task order is kept constant and some tasks might highlight differences more)
  • Mouse clicks
  • Time spent in each view (not sure there will be differences across the interfaces, we might aggregate, but still useful to understand how se2->se3 worked out)
  • Questionnaire responses

Other ideas?
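The first two metrics above could be pulled out of a cycle's event list roughly like this. This is a minimal sketch assuming a hypothetical event schema (`{"type": ..., "timestamp": ms}`); the actual field names in the Firebase logs may differ.

```python
def cycle_metrics(events):
    """Completion time and click count for one cycle's event list.

    Assumes each event is a dict with a "type" string and a
    "timestamp" in milliseconds (hypothetical schema).
    """
    timestamps = [e["timestamp"] for e in events]
    completion_time_s = (max(timestamps) - min(timestamps)) / 1000.0
    clicks = sum(1 for e in events if e["type"] == "mousedown")
    return {"completion_time_s": completion_time_s, "clicks": clicks}
```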

@kavidey
Collaborator

kavidey commented Oct 30, 2020

Currently, there is no code for data processing under se3/. Most of the code for data processing under se2/ should work for se3 though.

The big things that need to change are where data is downloaded from on Firebase and where the action list for each cycle lives. Previously everything was under users/{uid}/sessions/{sid}/cycles/{cid}/events; for se3, the cycle metadata (target pose, total time, number of IK arm resets, etc.) is stored under se3/users/{uid}/sessions/{sid}/cycles/{cid}/, and the action list for each cycle (clicks, view changes, etc.) is stored under se3/cycles/{cid}/.
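A minimal sketch of the new reference layout, just building the paths described above (the actual fetch, e.g. via the Firebase SDK, is omitted):

```python
# SE3 splits cycle metadata and per-cycle action lists into two trees,
# unlike SE2 which kept everything under one users/ tree.
def se3_cycle_metadata_path(uid, sid, cid):
    """Path to cycle metadata: target pose, total time, IK arm resets, etc."""
    return f"se3/users/{uid}/sessions/{sid}/cycles/{cid}"

def se3_cycle_actions_path(cid):
    """Path to the cycle's action list: clicks, view changes, etc."""
    return f"se3/cycles/{cid}"
```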

I am working on updates to the SE2 data processing code right now, but I can start on SE3 when that is finished.

@mayacakmak
Owner Author

Sounds good! We should have the first batch of data in the database for testing SE3 data processing.

@kavidey
Collaborator

kavidey commented Oct 30, 2020

As of right now, all of the data processing scripts in se3/ except for process_data.py should be working with the SE3 data (there isn't enough data to get in-depth results right now, and user filtering was also disabled).

One thing I'm not sure about is the best way to calculate the distance between the target pose and the starting pose of the EE (position could use Euclidean distance, but I don't know what metric to use for rotation). I'm also not sure of the best way to quantify how "flexible" a target is. This could just be the length of its thinnest axis, but there might be a more advanced way to do it (does this change for box vs. cylinder targets?).

Some notes:

  • A cylinder target will always have the same X and Y dimension (they represent the diameter of the target), and a longer Z dimension for the height.
  • A box target can have different values for all 3 axes. However (at least on the test page), X is always the smallest dimension, Y is the largest, and Z is in the middle.
  • Right now, the .csv file only has the position of the target, not the starting position of the arm (which is constant). Just noting here that the starting position of the arm is: (5.511428117752075, 2.489123249053955, -4.494971823692322) and the starting rotation is (0, 0, 0).

Right now, the .csv file generated by json_to_csv.py should have:

  • Position (x, y, z)
  • Rotation (x, y, z)
  • Target dimension (x, y, z)
  • Number of times "reset arm pose" was used
  • Number of clicks & Dragging duration

Something else that might be interesting to add (but is not there right now) is the amount of time spent on each view, the number of times the view was switched, or the total number of clicks in each view.
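Per-view time and switch counts could be accumulated from the action list roughly as follows. This is a sketch under an assumed event schema (`view-change` events carrying a view name, plus a final `cycle-end` event); the real event names in the logs may differ.

```python
from collections import defaultdict

def time_per_view(events):
    """Seconds spent in each view and the number of view switches.

    Assumes events like {"type": "view-change", "view": name, "timestamp": ms}
    and a closing {"type": "cycle-end", "timestamp": ms} (hypothetical schema).
    """
    totals = defaultdict(float)
    switches = 0
    current_view, entered_at = None, None
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["type"] == "view-change":
            if current_view is not None:
                totals[current_view] += (e["timestamp"] - entered_at) / 1000.0
                switches += 1
            current_view, entered_at = e["view"], e["timestamp"]
        elif e["type"] == "cycle-end" and current_view is not None:
            totals[current_view] += (e["timestamp"] - entered_at) / 1000.0
    return dict(totals), switches
```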

@mayacakmak
Owner Author

Nice! @kaviMD I don't think we need to worry about numerically representing the distance and size/flexibility of targets for Study 2 (i.e., we won't try to recreate the scatter plots). Since there are only 5 tasks, we can visually show what they are. What matters more directly is how much the gripper has to move/rotate to grasp the target. We asked participants to grasp a certain way, and I think you set up the tasks nicely so that the first two only require translation and the latter three require rotation in an increasing number of dimensions. The issue is that we're not really enforcing this, and I noticed it is sometimes possible to grasp the horizontal objects from the side. So we won't make any strong statements about how "far" or "difficult" the tasks are; we'll just say what they are. But it might be useful to report how much people changed position and how much they changed rotation for each task -- I'm pretty sure needing to change rotation adds to completion time, leads to more resets, etc.

And yes, it would be great to add:

  • amount of time spent on each view
  • the number of times the view was switched
  • (optional, not super important) clicks in each view

@kavidey
Collaborator

kavidey commented Oct 30, 2020

json_to_csv.py is updated with the new view metrics.
process_data.py is integrated with the new se3 data, but most of the graphs it was originally meant to create rely on distance metrics that we don't currently have calculated for se3.

I uploaded .csv files generated from all the se3 data we have right now to the Google Drive folder:
Cycle Data: https://drive.google.com/file/d/1Q3ssA1NFlrm_nSFDvnzNUTRp75VysM3R/view?usp=sharing
Questionnaire Data: https://drive.google.com/file/d/1Ht0VgF-SFtJHcGT5f3pcmsIcxoRL-KBC/view?usp=sharing

@mayacakmak
Owner Author

I'll go ahead and release more HITs now. Inspecting the questionnaire data, the quality seems much higher because I restricted 'location' to US only.
