2022
CRIPP-VQA: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering
Maitreya Patel, Tejas Gokhale, Chitta Baral, and Yezhou Yang
In Conference on Empirical Methods in Natural Language Processing (EMNLP) 2022
Videos often capture objects, their motion, and the interactions between different objects. Although real-world objects have physical properties associated with them, many of these properties (such as mass and coefficient of friction) are not captured directly by the imaging pipeline. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce a new video question answering task for reasoning about the implicit physical properties of objects in a scene. For this task, we introduce CRIPP-VQA, a dataset of videos of objects in motion, annotated with hypothetical/counterfactual questions about the effect of actions (such as removing, adding, or replacing objects), questions about planning (choosing actions to perform in order to reach a particular goal), and descriptive questions about the visible properties of objects. We benchmark the performance of existing video question answering models on two test settings of CRIPP-VQA: an i.i.d. setting and an out-of-distribution setting that contains objects with values of mass, coefficient of friction, and initial velocity not seen in the training distribution. Our experiments reveal a surprising and significant performance gap between answering questions about implicit properties (the focus of this paper) and explicit properties (the focus of prior work) of objects.
@inproceedings{patel2022cripp,
  title     = {{CRIPP-VQA}: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering},
  author    = {Patel, Maitreya and Gokhale, Tejas and Baral, Chitta and Yang, Yezhou},
  booktitle = {EMNLP, Main Conference},
  year      = {2022},
  url       = {https://maitreyapatel.com/CRIPP-VQA/},
}
Benchmarking generalization via in-context instructions on 1,600+ language tasks
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, and others
In Conference on Empirical Methods in Natural Language Processing (EMNLP) 2022
How can we measure the generalization of models to a variety of unseen tasks when provided with their language instructions? To facilitate progress toward this goal, we introduce Natural-Instructions v2, a benchmark of 1,600+ diverse language tasks and their expert-written instructions. It covers 70+ distinct task types, such as tagging, in-filling, and rewriting. These tasks were collected with contributions from NLP practitioners in the community and through an iterative peer review process to ensure their quality. With this large and diverse collection of tasks, we are able to rigorously benchmark cross-task generalization of models: training on a subset of tasks and evaluating on the remaining unseen ones. For instance, we quantify generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances, and model sizes. Based on these insights, we introduce Tk-Instruct, an encoder-decoder Transformer that is trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples) and outperforms existing larger models on our benchmark. We hope this benchmark facilitates future progress toward more general-purpose language understanding models.
@inproceedings{wang2022benchmarking,
  title     = {Benchmarking generalization via in-context instructions on 1,600+ language tasks},
  author    = {Wang, Yizhong and Mishra, Swaroop and Alipoormolabashi, Pegah and Kordi, Yeganeh and Mirzaei, Amirreza and others},
  booktitle = {EMNLP, Main Conference},
  year      = {2022},
  url       = {https://instructions.apps.allenai.org},
}
2020
MSpeC-Net: Multi-Domain Speech Conversion Network
Harshit Malaviya, Jui Shah, Maitreya Patel, Jalansh Munshi, and Hemant A Patil
In 45th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020
@inproceedings{malaviya2020mspec,
  title        = {{MSpeC-Net}: Multi-Domain Speech Conversion Network},
  author       = {Malaviya, Harshit and Shah, Jui and Patel, Maitreya and Munshi, Jalansh and Patil, Hemant A},
  booktitle    = {45th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages        = {7764--7768},
  year         = {2020},
  organization = {IEEE},
}
CinC-GAN for Effective F0 Prediction for Whisper-to-Normal Speech Conversion
Maitreya Patel, Mirali Purohit, Jui Shah, and Hemant A Patil
In 28th European Signal Processing Conference (EUSIPCO) 2020
@inproceedings{patel2020cinc,
  title        = {{CinC-GAN} for Effective F0 Prediction for Whisper-to-Normal Speech Conversion},
  author       = {Patel, Maitreya and Purohit, Mirali and Shah, Jui and Patil, Hemant A},
  booktitle    = {28th European Signal Processing Conference (EUSIPCO)},
  year         = {2020},
  organization = {IEEE},
}
Weak Speech Supervision: A case study of Dysarthria Severity Classification
Mirali Purohit, Mihir Parmar, Maitreya Patel, Harshit Malaviya, and Hemant A Patil
In 28th European Signal Processing Conference (EUSIPCO) 2020
@inproceedings{purohit2020weak,
  title        = {Weak Speech Supervision: A case study of Dysarthria Severity Classification},
  author       = {Purohit, Mirali and Parmar, Mihir and Patel, Maitreya and Malaviya, Harshit and Patil, Hemant A},
  booktitle    = {28th European Signal Processing Conference (EUSIPCO)},
  year         = {2020},
  organization = {IEEE},
}
2019
Novel adaptive generative adversarial network for voice conversion
Maitreya Patel, Mihir Parmar, Savan Doshi, Nirmesh J Shah, and Hemant A Patil
In 11th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) 2019
@inproceedings{patel2019novel,
  title        = {Novel adaptive generative adversarial network for voice conversion},
  author       = {Patel, Maitreya and Parmar, Mihir and Doshi, Savan and Shah, Nirmesh J and Patil, Hemant A},
  booktitle    = {11th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)},
  pages        = {1273--1281},
  year         = {2019},
  organization = {IEEE},
}
Effectiveness of cross-domain architectures for whisper-to-normal speech conversion
Mihir Parmar, Savan Doshi, Nirmesh J Shah, Maitreya Patel, and Hemant A Patil
In 27th European Signal Processing Conference (EUSIPCO) 2019
@inproceedings{parmar2019effectiveness,
  title        = {Effectiveness of cross-domain architectures for whisper-to-normal speech conversion},
  author       = {Parmar, Mihir and Doshi, Savan and Shah, Nirmesh J and Patel, Maitreya and Patil, Hemant A},
  booktitle    = {27th European Signal Processing Conference (EUSIPCO)},
  year         = {2019},
  organization = {IEEE},
}