
當(dāng)前位置:主頁(yè) > 產(chǎn)品展示 > ErgoVR虛擬人機(jī)交互測(cè)評(píng) > 人機(jī)工效虛擬系統(tǒng) > 可穿戴行走虛擬現(xiàn)實(shí)系統(tǒng)

可穿戴行走虛擬現(xiàn)實(shí)系統(tǒng)

Model: ErgoLAB VR-MMES

產(chǎn)品時(shí)間:2022-02-11

簡(jiǎn)要描述:

可穿戴行走虛擬現(xiàn)實(shí)系統(tǒng)實(shí)現(xiàn)在進(jìn)行人機(jī)環(huán)境或者人類(lèi)心理行為研究時(shí)結(jié)合虛擬現(xiàn)實(shí)技術(shù),基于三維虛擬現(xiàn)實(shí)環(huán)境變化的情況下實(shí)時(shí)同步采集人-機(jī)-環(huán)境定量數(shù)據(jù)(包括如眼動(dòng)、腦波、呼吸、心律、脈搏、皮電、皮溫、心電、肌電、肢體動(dòng)作、關(guān)節(jié)角度、人體壓力、拉力、握力、捏力、振動(dòng)、噪聲、光照、大氣壓力、溫濕度等物理環(huán)境數(shù)據(jù))并進(jìn)行分析評(píng)價(jià),所獲取的定量結(jié)果為科學(xué)研究做客觀數(shù)據(jù)支撐。

詳細(xì)介紹

可穿戴行走虛擬現(xiàn)實(shí)系統(tǒng)由津發(fā)科技自主研發(fā)的ErgoLAB虛擬世界人機(jī)環(huán)境同步平臺(tái)、美國(guó)WorldViz頭戴式行走虛擬現(xiàn)實(shí)系統(tǒng)等核心部件組成,人機(jī)環(huán)境同步平臺(tái)由虛擬現(xiàn)實(shí)同步模塊、可穿戴生理記錄模塊、VR眼動(dòng)追蹤模塊、可穿戴腦電測(cè)量模塊、交互行為觀察模塊、生物力學(xué)測(cè)量模塊、環(huán)境測(cè)量模塊等組成。實(shí)現(xiàn)在進(jìn)行人機(jī)環(huán)境或者人類(lèi)心理行為研究時(shí)結(jié)合虛擬現(xiàn)實(shí)技術(shù),基于三維虛擬現(xiàn)實(shí)環(huán)境變化的情況下實(shí)時(shí)同步采集人-機(jī)-環(huán)境定量數(shù)據(jù)(包括如眼動(dòng)、腦波、呼吸、心律、脈搏、皮電、皮溫、心電、肌電、肢體動(dòng)作、關(guān)節(jié)角度、人體壓力、拉力、握力、捏力、振動(dòng)、噪聲、光照、大氣壓力、溫濕度等物理環(huán)境數(shù)據(jù))并進(jìn)行分析評(píng)價(jià),所獲取的定量結(jié)果為科學(xué)研究做客觀數(shù)據(jù)支撐。

可穿戴行走虛擬現(xiàn)實(shí)系統(tǒng)是一套沉浸感更強(qiáng)、交互體驗(yàn)更佳的*浸入式虛擬現(xiàn)實(shí)解決方案。它最大的特點(diǎn)就是系統(tǒng)部署簡(jiǎn)單便捷,極大的提高了虛擬現(xiàn)實(shí)應(yīng)用的靈活性。用小巧輕便的頭盔取代傳統(tǒng)的大屏顯示,不再局限于用戶(hù)場(chǎng)地的大小,擺脫外界環(huán)境的束縛??纱┐魈摂M現(xiàn)實(shí)頭盔(Head Mount Display,簡(jiǎn)稱(chēng)HMD)是一種頭戴式虛擬現(xiàn)實(shí)顯示設(shè)備。通過(guò)頭部佩戴的方式,全l方位覆蓋體驗(yàn)者視角,營(yíng)造出更加身臨其境的沉浸效果。同時(shí),輔以6自由度的頭部位置跟蹤和全身動(dòng)作捕捉設(shè)備,通過(guò)對(duì)體驗(yàn)者視點(diǎn)位置的捕捉,使頭盔顯示內(nèi)容進(jìn)行相應(yīng)的改變,應(yīng)用于單人及多人協(xié)同體驗(yàn)中,提升交互感和體驗(yàn)感。使用者佩戴上虛擬現(xiàn)實(shí)頭盔,全角度覆蓋體驗(yàn)視角,使虛擬和現(xiàn)實(shí)的界限融為一體。

作為該套系統(tǒng)方案的核心數(shù)據(jù)同步采集與分析平臺(tái),ErgoLAB人機(jī)環(huán)境同步平臺(tái)不僅支持虛擬現(xiàn)實(shí)環(huán)境,也支持基于真實(shí)世界的戶(hù)外現(xiàn)場(chǎng)研究、以及基于實(shí)驗(yàn)室基礎(chǔ)研究的實(shí)驗(yàn)室研究,可以在任意的實(shí)驗(yàn)環(huán)境下采集多元數(shù)據(jù)并進(jìn)行定量評(píng)價(jià)。(人機(jī)環(huán)境同步平臺(tái)含虛擬現(xiàn)實(shí)同步模塊、可穿戴生理記錄模塊、虛擬現(xiàn)實(shí)眼動(dòng)追蹤模塊、可穿戴腦電測(cè)量模塊、交互行為觀察模塊、生物力學(xué)測(cè)量模塊、環(huán)境測(cè)量模塊等組成)

作為該套系統(tǒng)方案的核心虛擬現(xiàn)實(shí)軟件引擎,WorldViz不僅支持虛擬現(xiàn)實(shí)頭盔,還可為用戶(hù)提供優(yōu)質(zhì)的應(yīng)用內(nèi)容。結(jié)合行走運(yùn)動(dòng)追蹤系統(tǒng)、虛擬人機(jī)交互系統(tǒng),使用者最終完成與虛擬場(chǎng)景及內(nèi)容的互動(dòng)交互操作。

應(yīng)用領(lǐng)域

BIM環(huán)境行為研究虛擬仿真實(shí)驗(yàn)室解決方案:建筑感性設(shè)計(jì)、環(huán)境行為、室內(nèi)設(shè)計(jì)、人居環(huán)境研究等;

交互設(shè)計(jì)虛擬仿真實(shí)驗(yàn)室解決方案:虛擬規(guī)劃、虛擬設(shè)計(jì)、虛擬裝配、虛擬評(píng)審、虛擬訓(xùn)練、設(shè)備狀態(tài)可視化等;

國(guó)防武l器裝備人機(jī)環(huán)境虛擬仿真實(shí)驗(yàn)室解決方案:武l器裝備人機(jī)環(huán)境系統(tǒng)工程研究以及軍事心理學(xué)應(yīng)用,軍事訓(xùn)練、軍事教育、作戰(zhàn)指揮、武l器研制與開(kāi)發(fā)等;

用戶(hù)體驗(yàn)與可用性研究虛擬仿真實(shí)驗(yàn)室方案:游戲體驗(yàn)、體驗(yàn)類(lèi)運(yùn)動(dòng)項(xiàng)目、影視類(lèi)娛樂(lè)、多人參與的娛樂(lè)項(xiàng)目。

虛擬購(gòu)物消費(fèi)行為研究實(shí)驗(yàn)室方案

安全人機(jī)與不安全行為虛擬仿真實(shí)驗(yàn)室方案

Driving behavior virtual simulation laboratory solution

Human factors engineering and work study virtual simulation laboratory solution

其用戶(hù)遍布各個(gè)應(yīng)用領(lǐng)域,包括教育和心理、培訓(xùn)、建筑設(shè)計(jì)、軍事航天、醫(yī)療、娛樂(lè)、圖形建模等。同時(shí)該產(chǎn)品在認(rèn)知相關(guān)的科研領(lǐng)域更具競(jìng)爭(zhēng)力,在歐美和國(guó)內(nèi)高等學(xué)府和研究機(jī)構(gòu)擁有五百個(gè)以上用。

1) Research Center for Virtual Environments and Behavior, University of California, Santa Barbara

該實(shí)驗(yàn)室主要致力于心理認(rèn)知相關(guān)的科學(xué)研究,包括社會(huì)心理學(xué)、視覺(jué)、空間認(rèn)知等,并有大量論文在國(guó)際知l名刊物發(fā)表,具體詳見(jiàn)論文列表。

2) Psychology and Computer Science Laboratory, Miami University

研究領(lǐng)域:空間認(rèn)知

Human Spatial Cognition

In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can wirelessly walk through extremely large computer-generated virtual environments.

Research Project Examples

Specificity of Spatial Memories: When people learn about the locations of objects in a scene, what information gets represented in memory? For example, do people only remember what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar but not identical to the views that they have learned. In a third project, we examine the reference frames that are used to code spatial information in memory. In a fourth project, we investigate whether the biases that people have in their memory for pictures also occur when they remember three-dimensional scenes.

Nonvisual Egocentric Spatial Updating: When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update (i.e., keep track of changes in our position and orientation relative to the environment).

3) Department of Psychology, University of Waterloo, Canada

設(shè)備: WorldViz Vizard 3D software toolkit, WorldViz PPT H8 optical inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Arrington Eye Tracker

研究領(lǐng)域:行為科學(xué)

Professor Colin Ellard on his research: I am interested in how the organization and appearance of natural and built spaces affect movement, wayfinding, emotion and physiology. My approach to these questions is strongly multidisciplinary and is informed by collaborations with architects, artists, planners, and health professionals. Current studies include investigations of the psychology of residential design, wayfinding at the urban scale, restorative effects of exposure to natural settings, and comparative studies of defensive responses. My research methods include both field investigations and studies of human behavior in immersive virtual environments.

部分發(fā)表論文: Colin Ellard (2009). Where am I? Why we can find our way to the Moon but get lost in the mall. Toronto: Harper Collins Canada.

Journal Articles: Colin Ellard and Lori Wagar (2008). Plasticity of the association between visual space and action space in a blind-walking task. Perception, 37(7), 1044-1053.

Colin Ellard and Meghan Eller (2009). Spatial cognition in the gerbil: Computing optimal escape routes from visual threats. Animal Cognition, 12(2), 333-345.

Posters: Kevin Barton and Colin Ellard (2009). Finding your way: The influence of global spatial intelligibility and field-of-view on a wayfinding task. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

Brian Garrison and Colin Ellard (2009). The connection effect in the disconnect between peripersonal and extrapersonal space. Poster session presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.

4)、美國(guó)斯坦福大學(xué)信息學(xué)院虛擬人交互實(shí)驗(yàn)室

設(shè)備: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Complete Characters avatar package

The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds. However, oftentimes it is necessary to develop new gesture tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms in order to answer these basic social questions. Consequently, we also engage in research geared towards developing new ways to produce these VR simulations.

Our research programs tend to fall under one of three larger questions:

      1. What new social issues arise from the use of immersive VR communication systems?

      2. How can VR be used as a basic research tool to study the nuances of face-to-face interaction?

      3. How can VR be applied to improve everyday life, such as legal practices and communication systems?

5) Neuroscience Laboratory, University of California, San Diego

設(shè)備: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display

The long-range objective of the laboratory is to better understand the neural bases of human sensorimotor control and learning. Our approach is to analyze normal motor control and learning processes, and the nature of the breakdown in those processes in patients with selective failure of specific sensory or motor systems of the brain. Toward this end, we have developed novel methods of imaging and graphic analysis of spatiotemporal patterns inherent in digital records of movement trajectories. We monitor movements of the limbs, body, head, and eyes, both in real environments and in 3D multimodal, immersive virtual environments, and recently have added synchronous recording of high-definition EEG. One domain of our studies is Parkinson's disease. Our studies have been dissecting out those elements of sensorimotor processing which may be most impaired in Parkinsonism, and those elements that may most crucially depend upon basal ganglia function and cannot be compensated for by other brain systems. Since skilled movement and learning may be considered opposite sides of the same coin, we also are investigating learning in Parkinson’s disease: how Parkinson’s patients learn to adapt their movements in altered sensorimotor environments; how their eye-hand coordination changes over the course of learning sequences; and how their neural dynamics are altered when learning to make decisions based on reward. Finally, we are examining the ability of drug versus deep brain stimulation therapies to ameliorate deficits in these functions.

 

產(chǎn)品咨詢(xún)

留言框

  • 產(chǎn)品:

  • 您的單位:

  • 您的姓名:

  • 聯(lián)系電話(huà):

  • 常用郵箱:

  • 省份:

  • 詳細(xì)地址:

  • 補(bǔ)充說(shuō)明:

  • 驗(yàn)證碼:

    請(qǐng)輸入計(jì)算結(jié)果(填寫(xiě)阿拉伯?dāng)?shù)字),如:三加四=7

Human Factors and Ergonomics

Human-machine engineering, human error and system safety, ergonomics, workplace and ergonomic workload, etc.

Safety Ergonomics

Applying the principles and methods of ergonomics from a safety standpoint to solve safety problems at the human-machine interface.

Traffic Safety and Driving Behavior

Holistic study of the driver-vehicle-road-environment system, helping to improve driving-system design, driving safety, and the road environment.

User Experience and Interaction Design

ErgoLAB can collect eye-tracking, physiological, and behavioral data on desktop, mobile, and VR platforms to explore how product design and human-computer interaction affect user experience.

Architecture and Environmental Behavior

Studying how urban planning and architectural design can meet people's behavioral and psychological needs, creating good environments and improving work efficiency.

Consumer Behavior and Neuromarketing

ErgoLAB collects and analyzes consumers' physiological, facial-expression, and behavioral data to understand their cognitive processing and decision-making, identify the motivations behind consumer behavior, and inform marketing strategies that encourage purchase intention and purchase behavior.


版權(quán)所有 © 2026北京津發(fā)科技股份有限公司(m.xynet6.com)
備案號(hào):京ICP備14045309號(hào)-4 技術(shù)支持:智慧城市網(wǎng) 管理登陸 GoogleSitemap
