Egocentric video has grown in importance with the emergence of wearable devices such as Google Glass and HoloLens. In addition, predicting the object a person will interact with is an increasingly important capability for smart wearable systems and smart robot systems. In this study, we predicted the target object that a human will interact with and measured how early this prediction can be made. We built computational models based on prototype theory for human prediction of the target object, which measure both the prediction time interval and the error rate in egocentric and third-person view video. Using these computational models together with the results of a human experiment, we compared performance on egocentric video against third-person view video. Through this comparison, we found empirical support that egocentric video outperforms third-person view video in both the human experiment and the computational model: specifically, performance was 1.77 and 3.43 times better, respectively, indicating that egocentric video is a better platform than third-person view video for predicting a future target object.