Over the past few days we've covered experiments that use keyboard responses. Today let's look at another response modality jsPsych supports: eye tracking. Since I haven't studied this feature in depth yet, I'll mostly walk through what the official documentation covers.
As a student I took courses on eye tracking and ran related experiments. The spot your eyes fixate can, quite intuitively, be taken as where your attention currently is. Because of how the eye is built, humans see sharply only within a few degrees of straight ahead; beyond that, peripheral vision can still detect things, but not resolve them clearly.
An eye-tracker experiment differs from an ordinary keyboard-response experiment in one extra step: eye calibration. Human eyes (and glasses) reflect light, and if the reflection is too strong, the eye tracker cannot detect the direction of gaze (as I recall, it tracks the pupil). The procedure is hard to picture from text alone, so I'll just let a professor from the Department of Psychology at National Chung Cheng University explain it XD
Now let's see how to do eye tracking with jsPsych.
The official tutorial is here.
Normally an eye-tracking experiment needs a dedicated eye tracker sitting in front of the computer, but most people don't have one at home. So what do we do? jsPsych has us covered.
jsPsych integrates a package called WebGazer.js. This open-source library, developed mainly by researchers at Brown and Pomona, lets you use your laptop's webcam as an eye tracker. Pretty amazing, huh.
Here are the WebGazer.js website and one of its examples.
And here's an eye-tracking demo I built from the official docs. Feel free to try it out first~
The eye-tracking experiment code below is copied from the official documentation. I'll explain it with comments.
<!DOCTYPE html>
<html>
<head>
<!-- Load the jsPsych libraries -->
<script src="https://unpkg.com/jspsych@7.3.3"></script>
<script src="https://unpkg.com/@jspsych/plugin-preload@1.1.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-html-button-response@1.1.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-html-keyboard-response@1.1.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-image-keyboard-response@1.1.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-webgazer-init-camera@1.0.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-webgazer-calibrate@1.0.2"></script>
<script src="https://unpkg.com/@jspsych/plugin-webgazer-validate@1.0.2"></script>
<script src="https://cdn.jsdelivr.net/gh/jspsych/jspsych@jspsych@7.0.0/examples/js/webgazer/webgazer.js"></script>
<script src="https://unpkg.com/@jspsych/extension-webgazer@1.0.2"></script>
<link
rel="stylesheet"
href="https://unpkg.com/jspsych@7.3.3/css/jspsych.css"
/>
<style>
.jspsych-btn {
margin-bottom: 10px;
}
</style>
</head>
<body></body>
<script>
/**
* Initialize jsPsych
* and load the WebGazer eye-tracking extension
*/
var jsPsych = initJsPsych({
extensions: [
{type: jsPsychExtensionWebgazer}
]
});
/**
* Preload image assets so they don't have to be downloaded mid-experiment
*/
var preload = {
type: jsPsychPreload,
images: ['img/blue.png']
}
/**
* Instructions
*/
var camera_instructions = {
type: jsPsychHtmlButtonResponse,
stimulus: `
<p>In order to participate you must allow the experiment to use your camera.</p>
<p>You will be prompted to do this on the next screen.</p>
<p>If you do not wish to allow use of your camera, you cannot participate in this experiment.<p>
<p>It may take up to 30 seconds for the camera to initialize after you give permission.</p>
`,
choices: ['Got it'],
}
/**
* Eye-tracking experiments need calibration, so first get the webcam standing by
*/
var init_camera = {
type: jsPsychWebgazerInitCamera
}
/**
* Instructions for the calibration phase
*/
var calibration_instructions = {
type: jsPsychHtmlButtonResponse,
stimulus: `
<p>Now you'll calibrate the eye tracking, so that the software can use the image of your eyes to predict where you are looking.</p>
<p>You'll see a series of dots appear on the screen. Look at each dot and click on it.</p>
`,
choices: ['Got it'],
}
/**
* Time to calibrate~
* The participant fixates five points on the screen. With the points below,
* they are top-left, top-right, center, bottom-left, and bottom-right.
* Each point is repeated twice, and the fixation order is randomized
*/
var calibration = {
type: jsPsychWebgazerCalibrate,
calibration_points: [
[25,25],[75,25],[50,50],[25,75],[75,75]
],
repetitions_per_point: 2,
randomize_calibration_order: true
}
/**
* Instructions for the validation phase
*/
var validation_instructions = {
type: jsPsychHtmlButtonResponse,
stimulus: `
<p>Now we'll measure the accuracy of the calibration.</p>
<p>Look at each dot as it appears on the screen.</p>
<p style="font-weight: bold;">You do not need to click on the dots this time.</p>
`,
choices: ['Got it'],
post_trial_gap: 1000
}
/**
* Validation phase
* Checks whether, after calibration, jsPsych can actually track eye movements
* and map them to the correct screen positions.
* In my experience, calibration and validation can be exhausting (glare is hard to eliminate);
* in bad cases it can take over ten minutes, and in the worst case
* you never make it to the main experiment at all
*/
var validation = {
type: jsPsychWebgazerValidate,
validation_points: [
[25,25],[75,25],[50,50],[25,75],[75,75]
],
roi_radius: 200,
time_to_saccade: 1000,
validation_duration: 2000,
data: {
task: 'validate'
}
}
var recalibrate_instructions = {
type: jsPsychHtmlButtonResponse,
stimulus: `
<p>The accuracy of the calibration is a little lower than we'd like.</p>
<p>Let's try calibrating one more time.</p>
<p>On the next screen, look at the dots and click on them.<p>
`,
choices: ['OK'],
}
/**
* Conditional timeline that bundles the calibration and validation phases:
* if any validation point had fewer than 50% of gaze samples inside its ROI,
* the calibration and validation are run again
*/
var recalibrate = {
timeline: [recalibrate_instructions, calibration, validation_instructions, validation],
conditional_function: function(){
var validation_data = jsPsych.data.get().filter({task: 'validate'}).values()[0];
return validation_data.percent_in_roi.some(function(x){
var minimum_percent_acceptable = 50;
return x < minimum_percent_acceptable;
});
},
data: {
phase: 'recalibration'
}
}
/**
* Message marking the end of calibration
*/
var calibration_done = {
type: jsPsychHtmlButtonResponse,
stimulus: `
<p>Great, we're done with calibration!</p>
`,
choices: ['OK']
}
/**
* Experiment instructions
*/
var begin = {
type: jsPsychHtmlKeyboardResponse,
stimulus: `<p>The next screen will show an image to demonstrate adding the webgazer extension to a trial.</p>
<p>Just look at the image while eye tracking data is collected. The trial will end automatically.</p>
<p>Press any key to start.</p>
`
}
/**
* Trial settings
* From my test runs, jsPsych records the continuous gaze positions
* while the image is on screen.
* The trial ends automatically after 2000 ms (choices is "NO_KEYS",
* so key presses are ignored).
* Since there is only one trial, the experiment then ends
*/
var trial = {
type: jsPsychImageKeyboardResponse,
stimulus: 'img/blue.png',
choices: "NO_KEYS",
trial_duration: 2000,
extensions: [
{
type: jsPsychExtensionWebgazer,
params: {targets: ['#jspsych-image-keyboard-response-stimulus']}
}
]
}
/**
* What data to show on screen after the experiment finishes
*/
var show_data = {
type: jsPsychHtmlKeyboardResponse,
stimulus: function() {
var trial_data = jsPsych.data.getLastTrialData().values();
var trial_json = JSON.stringify(trial_data, null, 2);
return `<p style="margin-bottom:0px;"><strong>Trial data:</strong></p>
<pre style="margin-top:0px;text-align:left;">${trial_json}</pre>`;
},
choices: "NO_KEYS"
};
/**
* Arrange the order of the experiment phases
*/
jsPsych.run([
preload,
camera_instructions,
init_camera,
calibration_instructions,
calibration,
validation_instructions,
validation,
recalibrate,
calibration_done,
begin,
trial,
show_data
]);
</script>
</html>
The experiment data looks like this. The webgazer_data field is the record of continuous gaze positions, and webgazer_targets gives the position and size of the stimulus image. By comparing the two, you can tell whether the participant looked at the image: whenever a gaze point falls inside the image's bounds, they were looking at it.
{
"rt": null,
"stimulus": "img/blue.png",
"response": null,
"trial_type": "image-keyboard-response",
"trial_index": 4,
"time_elapsed": 30701,
"internal_node_id": "0.0-4.0",
"webgazer_data": [
{ "x": 1065, "y": 437, "t": 39},
{ "x": 943, "y": 377, "t": 79},
{ "x": 835, "y": 332, "t": 110},
{ "x": 731, "y": 299, "t": 146},
{ "x": 660, "y": 271, "t": 189},
{ "x": 606, "y": 251, "t": 238},
{ "x": 582, "y": 213, "t": 288},
{ "x": 551, "y": 200, "t": 335},
{ "x": 538, "y": 183, "t": 394},
{ "x": 514, "y": 177, "t": 436},
{ "x": 500, "y": 171, "t": 493},
{ "x": 525, "y": 178, "t": 542},
{ "x": 537, "y": 182, "t": 592},
{ "x": 543, "y": 178, "t": 633},
{ "x": 547, "y": 177, "t": 691},
{ "x": 558, "y": 174, "t": 739},
{ "x": 574, "y": 183, "t": 789},
{ "x": 577, "y": 197, "t": 838},
{ "x": 584, "y": 214, "t": 889},
{ "x": 603, "y": 218, "t": 937},
{ "x": 606, "y": 221, "t": 987}
],
"webgazer_targets": {
"#jspsych-image-keyboard-response-stimulus": {
"x": 490,
"y": 135,
"height": 300,
"width": 300,
"top": 135,
"bottom": 435,
"left": 490,
"right": 790
}
}
}
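The comparison described above can be sketched in a few lines. This is a minimal example, not part of the jsPsych API; it assumes the webgazer_data and webgazer_targets shapes shown in the trial data above, and the helper name gazeOnTarget is my own.

```javascript
// Compute the proportion of gaze samples that fall inside the target's
// bounding box. A result near 1 means the participant looked at the image
// for most of the trial.
function gazeOnTarget(trialData, targetSelector) {
  const box = trialData.webgazer_targets[targetSelector];
  const samples = trialData.webgazer_data;
  // Keep only the samples whose (x, y) lie inside the box
  const inside = samples.filter(
    (s) => s.x >= box.left && s.x <= box.right &&
           s.y >= box.top && s.y <= box.bottom
  ).length;
  return inside / samples.length;
}

// Toy trial data mimicking the structure of the output above
const trialData = {
  webgazer_data: [
    { x: 500, y: 171, t: 493 }, // inside the 490-790 / 135-435 box
    { x: 100, y: 50,  t: 542 }, // outside
  ],
  webgazer_targets: {
    '#jspsych-image-keyboard-response-stimulus':
      { top: 135, bottom: 435, left: 490, right: 790 },
  },
};

console.log(
  gazeOnTarget(trialData, '#jspsych-image-keyboard-response-stimulus')
); // 0.5
```

In a real analysis you'd run this over the webgazer_data of every trial, but the idea is the same: a point-in-rectangle test against the target's bounding box.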
Once again, here's the eye-tracking demo I built from the official docs. Give it a try~
I originally wanted to try building an eye-tracking study with jsPsych, but I ran out of time, and my Windows laptop apparently can't detect my eyes orz, so I'm skipping it for now. Once I have some free time and get my hands on a Mac laptop, I may do an eye-tracking study as a bonus entry.
That wraps up the introduction to jsPsych. Tomorrow, let's talk about how maybe you don't need to write code to build experiments.
While reading the official documentation, I discovered that one of the official demo URLs was broken, which cost me a long time debugging.
So there I was at 3 a.m., fixing a bug in someone else's project, and I contributed the fix back to the jsPsych repository. One of the maintainers accepted the contribution within three minutes and even thanked me XD