Ophthalmic Biomarker Detection: Highlights from the IEEE Video and Image Processing Cup 2023 Student Competition (2024)


  • Citation

    G. AlRegib, M. Prabhushankar, K. Kokilepersaud, P. Chowdhury, Z. Fowler, S. Trejo Corona, L. Thomaz, A. Majumdar, “Ophthalmic Biomarker Detection: Highlights from the IEEE Video and Image Processing Cup 2023 Student Competition,” in IEEE Signal Processing Magazine, 2024.

  • Review

    Date of Acceptance: May 12 2024

  • Codes

    https://github.com/olivesgatech/VIPCUP2023_OLIVES.git

  • Bib

    @ARTICLE{alregib2024ophthalmic,
    author={AlRegib, G. and Prabhushankar, M. and Kokilepersaud, K. and Chowdhury, P. and Fowler, Z. and Trejo Corona, S. and Thomaz, L. and Majumdar, A.},
    journal={IEEE Signal Processing Magazine},
    title={Ophthalmic Biomarker Detection: Highlights from the IEEE Video and Image Processing Cup 2023 Student Competition},
    year={2024}}

  • VIP

    https://alregib.ece.gatech.edu/2023-vip-cup/

  • Contact

    olives.gatech@gmail.com
    https://ghassanalregib.info/

Ghassan AlRegib, Mohit Prabhushankar, Kiran Kokilepersaud, Prithwijit Chowdhury, Zoe Fowler, Stephanie Trejo Corona, Lucas Thomaz, and Angshul Majumdar

I Introduction

Ophthalmic clinical trials study treatment efficacy and are performed with a specific purpose and set of procedures. Generally, this purpose is to assess one of two variables. The first variable is the control: a biomarker or other measurement that provides an assessment of the presence, absence, or severity of the studied disease. Treatment administration is then determined based on this control. The second variable is the treatment protocol or regimen, which can be influenced by multiple factors such as drug dosage and the type of drug used. In fact, the variety and amount of data collected throughout the clinical trial process can also guide the treatment regimen. For example, 1D clinical measurements and structural biomarkers are commonly observed, as well as 2D color and near-IR fundus photography and even 3D optical coherence tomography (OCT). Thus, clinical trials are designed to identify which of the aforementioned data modalities are relevant. However, because of the large quantity and variety of these data, it is impractical to conduct trials by controlling each of these variables.

Deep learning-based artificial intelligence systems have shown an affinity for understanding and exploiting multi-modal correlations in a high-dimensional space. The OLIVES dataset [1] is a multi-modal dataset for deep learning applications, providing access to 2D and 3D ophthalmic imaging data, scalar clinical labels, and biomarkers for two different eye diseases. Prior results on this dataset in [1] use state-of-the-art self-supervised methods to assess the presence of biomarkers, though performance varied between biomarkers. This variance arises because detecting the presence or absence of biomarkers is a personalization challenge rather than a generalization challenge. This phenomenon is demonstrated in Figure 1, where the variation within OCT scans of a patient between visits can be minimal, while the difference in manifestation of the same disease across different patients may be substantial. The domain difference between OCT scans can arise due to pathology manifestation across patients (Fig. 1a and Fig. 1b), clinical labels (Fig. 1c), and the visit along the treatment process at which the scan is taken (Fig. 1d). Thus, while deep learning algorithms have shown an affinity towards generalization, they are lacking in personalization.


Due to prior deep learning approaches’ inability to balance generalization and personalization, we use the OLIVES dataset to host the 2023 Video and Image Processing (VIP) Cup to further explore and address this issue in biomarker detection of ophthalmology data. The VIP Cup offers a unique experience to undergraduates, allowing students to work together to solve challenging, real-world problems with video and image processing techniques. Each team includes one faculty member serving as an advisor, at most one graduate student as a mentor, and at least three but no more than ten undergraduate students. Formed teams participate in an open competition, with the top three teams presenting their work at the final competition held at the 2023 IEEE International Conference on Image Processing (ICIP) in Kuala Lumpur, Malaysia. Travel costs of finalist teams are supported by the IEEE Signal Processing Society (SPS). In this article, we share an overview of the VIP Cup experience including competition tasks, participating teams, technical approaches, statistics, and competition experience.

II Tasks and Evaluation Criteria

The 2023 VIP Cup competition consists of two distinct phases, each with unique challenges and assessment criteria. In this section, we provide an overview of the tasks teams are challenged to complete, along with the provided resources and the criteria teams are evaluated on.

II-A VIP Cup Tasks

The competition takes place across two phases, which aim at testing teams’ approaches to both the generalization and personalization challenges present within ophthalmology data. Throughout this section, we describe the specific tasks present within each competition phase.

Table I: Amount of multimodal data provided in the training set and in the Phase 1 and Phase 2 test sets.

| Modality  | Per Visit | Per Eye       | Train Total | Phase 1 Test Set | Phase 2 Test Set |
| OCT       | 49        | N_p × 49      | 78,189      | 3,871            | 250              |
| Clinical  | 4         | N_p × 4       | 5,082       | 320              | 250              |
| Biomarker | 6         | N_p × 49 × 6  | 469,134     | 23,226           | 1,500            |

II-A1 Open Competition Phase 1 - Generalization Task

The overall task in Phase 1 was to predict the presence or absence of six different biomarkers on every OCT scan in the provided test set. This task, in general, can be challenging due to different biomarkers exhibiting varying levels of granularity, where some biomarkers are more visually apparent than others.

The training set for this task was derived from the OLIVES dataset and consisted of the labeled OCT scans, along with the associated clinical information and the corresponding biomarker labels as the ground truth. Participants were encouraged to make use of all available modalities. Additionally, the participants were provided with a testing dataset derived from a recent clinical study conducted in collaboration with Retina Consultants of Texas (Houston, TX, USA). However, teams were only provided with these OCT scans and their clinical information. Once teams had developed their approach, they submitted their algorithms to Codalab for evaluation.
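To make the setup concrete, the following is a minimal, illustrative sketch of the Phase 1 multi-label formulation in PyTorch. It is not the released starter code; the grayscale input handling, backbone choice, and hyperparameters are assumptions.

```python
# Minimal sketch of the Phase 1 multi-label setup (illustrative, not the
# official starter code): a ResNet backbone predicts six binary biomarkers
# per OCT B-scan with a sigmoid/BCE formulation.
import torch
import torch.nn as nn
from torchvision import models

NUM_BIOMARKERS = 6

model = models.resnet18(weights=None)
# Grayscale OCT input: adapt the first convolution to a single channel.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_BIOMARKERS)

criterion = nn.BCEWithLogitsLoss()  # independent per-biomarker losses
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(oct_batch, biomarker_batch):
    """oct_batch: (B, 1, H, W) scans; biomarker_batch: (B, 6) labels in {0, 1}."""
    optimizer.zero_grad()
    logits = model(oct_batch)
    loss = criterion(logits, biomarker_batch.float())
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(oct_batch, threshold=0.5):
    """Threshold sigmoid outputs to obtain 0/1 predictions per biomarker."""
    with torch.no_grad():
        return (torch.sigmoid(model(oct_batch)) > threshold).int()
```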

II-A2 Open Competition Phase 2 - Personalization Task

In Phase 1 of the competition, each slice in the test set was treated as its own independent entity. In reality, however, every set of 49 slices within the test set is associated with a specific patient's eye. Hence, practitioners may be interested in performance with respect to the patient's eye as a whole, rather than performance with respect to isolated slices of the retina. Thus, Phase 2 of the competition is designed to better assess how well each model is able to personalize.

For Phase 2, the organizers introduce a new held-out test set that replaces the previous test set from the first stage of the competition. This new test set is labeled by our medical partners from Retina Consultants of Texas in the same manner as the first stage, with every image associated with 6 biomarkers. In the Phase 1 test set, a large pool of images is drawn from different visits of the same patients. Hence, the Phase 1 test set has substantial redundancy despite its larger number of images, as evidenced by Table I. The Phase 2 test set, on the other hand, is drawn from a larger patient pool and population base, which allows a better assessment of both the generalization and personalization aspects of the submitted algorithms. To perform the personalization aspect of the competition, the registered teams from Phase 1 were given the opportunity to re-train their models and submit the biomarker prediction files for each image in the test set. The overall finalists were then drawn from these performance outcomes on the Phase 2 test set.

II-B Resources

Teams were provided with access to the labeled OLIVES dataset and another ophthalmology clinical trial dataset from Retina Consultants of Texas for Phase 1, as well as initial Python code for using this dataset [1]. In Phase 2, we provide a subset of the Kermany dataset for training purposes [2]. We list a summary of the overall amount of multimodal data included for Phase 1 and Phase 2 in Table I, detailing the amount of data included in both train and test sets. The difference in challenge tasks is clearly indicated in this table. For example, in Phase 2, our new held-out test set contained biomarker labels from the Kermany dataset labeled privately by our medical partners [2]. This difference between the Phase 1 and Phase 2 test sets is demonstrated in Table I, where the Phase 2 test set has a much larger patient diversity (167 vs. 40 unique patients) while having fewer images (250 vs. 3,871). In addition, teams were provided with potential suggestions for improving detection, such as integrating clinical values into the designed algorithm, as recently done in the contrastive learning literature using OCT data [3], [4].
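As an illustration of that suggested direction, the sketch below shows one way clinical values can guide representation learning in the spirit of [3], [4]: scans sharing a discretized clinical value (for example, a binned BCVA score) are treated as positives in a supervised-contrastive loss. The binning, variable names, and temperature are assumptions, not the exact formulation of those papers.

```python
# Hedged sketch of clinically guided supervised contrastive learning in the
# spirit of [3], [4]: OCT scans that share a discretized clinical value
# (e.g., binned BCVA) are treated as positives. Names are illustrative.
import torch
import torch.nn.functional as F

def clinical_supcon_loss(embeddings, clinical_bins, temperature=0.07):
    """embeddings: (B, D) projection-head outputs; clinical_bins: (B,) integer bins."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature  # pairwise similarities (B, B)
    B = z.size(0)
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)
    # Positives: pairs that share a clinical bin, excluding self-pairs.
    pos_mask = (clinical_bins.unsqueeze(0) == clinical_bins.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, -1e9)  # exclude self-pairs from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of positives, for anchors that have at least one positive.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(sum_log_prob_pos[valid] / pos_counts[valid]).mean()
```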

II-C Evaluation Criteria

We implement two distinct evaluation approaches to judge the teams' generalization and personalization performance in Phases 1 and 2, respectively. The top three teams at the end of Phase 2 are then evaluated to determine the final rankings. In this section, we go over the evaluation criteria for each phase, as well as the criteria on which the finalists are judged.

II-C1 Phase 1 Evaluation

For Phase 1, we draw the test set from an entirely separate clinical trial with the same disease pathology as the training set, where each slice in the test set is treated as its own independent entity in the Phase 1 biomarker detection task. Thus, we make use of the macro-averaged F1-score to measure performance for this phase of the competition. This metric is the equally weighted average of the F1-scores for each of the biomarkers in the 6-biomarker classification task. This metric is desirable because the test set is highly imbalanced, making it impossible to guarantee an equal distribution of biomarkers within each OCT scan. The F1-score better reflects performance under this imbalance, as it is the harmonic mean of precision and recall. In addition, we want our metric to treat each class of biomarker as equally important, rather than reflecting a bias towards classes with more instances present. Therefore, we designate the top teams as those with the highest macro-averaged F1-score across all 6 biomarkers. With the starter code provided, we achieve a baseline score of 0.6315 when using the described metric for the Phase 1 dataset.
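A minimal sketch of this metric, assuming predictions and labels are stored as (num_images, 6) binary arrays:

```python
# Hedged sketch of the Phase 1 metric: macro-averaged F1 over the six
# biomarkers, computed on (num_images, 6) arrays of 0/1 labels.
import numpy as np
from sklearn.metrics import f1_score

def phase1_score(y_true, y_pred):
    """y_true, y_pred: (N, 6) binary arrays; returns the macro-averaged F1."""
    return f1_score(y_true, y_pred, average="macro")

# Example with dummy predictions, for illustration only:
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 6))
y_pred = rng.integers(0, 2, size=(100, 6))
print(round(phase1_score(y_true, y_pred), 4))
```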

II-C2 Phase 2 Evaluation

For Phase 2, the test set is drawn from a general population setting and not from a clinical trial. For this portion of the competition, we compute the macro F1-score for each slice in the same way as Phase 1. Unlike Phase 1, however, we now average the F1-scores with respect to each set of slices associated with an individual patient, rather than averaging across the test set as a whole. This results in a macro F1-score associated with each individual patient. The overall ranking of each team is established by averaging these scores across all patients. The three teams with the highest performance in Phase 2 are then invited to present their work in the final phase at ICIP 2023. With the provided starter code, we achieve a baseline result of 0.6943 using the described evaluation metric for the Phase 2 data.
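A minimal sketch of the Phase 2 scoring, assuming each row of the prediction array carries a patient identifier used for grouping:

```python
# Hedged sketch of the Phase 2 metric: a macro F1 is computed per patient
# (over that patient's set of slices) and then averaged across patients.
# Grouping by a patient identifier array is an assumption about the data layout.
import numpy as np
from sklearn.metrics import f1_score

def phase2_score(y_true, y_pred, patient_ids):
    """y_true, y_pred: (N, 6) binary arrays; patient_ids: (N,) identifiers."""
    scores = []
    for pid in np.unique(patient_ids):
        idx = patient_ids == pid
        scores.append(f1_score(y_true[idx], y_pred[idx], average="macro"))
    return float(np.mean(scores))
```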

II-C3 Finalist Judging Criteria

Judging for the final phase of the competition was held live at the ICIP 2023 conference and was based on five equally weighted criteria. Each of the three finalist teams was scored on the five criteria, and the team with the highest score placed 1st, the team with the second highest score placed 2nd, and the team with the third highest score placed 3rd in the competition. The five equally weighted criteria were: (1) innovation of the proposed approach, (2) performance on the generalization metric, (3) performance on the personalization metric, (4) quality and clarity of the final report, and (5) quality and clarity of the presentation. Each criterion was scored with a 1, 2, or 3: the best team in each criterion received 3 points, the second-best team received 2 points, and the third-best team received 1 point. The final rankings were based on the total points awarded across the five criteria during judge deliberations at the end of the competition and were ultimately decided by the judges at their discretion.

III Participation Statistics

The 2023 VIP Cup experienced an unprecedented surge in participation, demonstrating its widespread appeal and global engagement. Hosted on Codalab, the competition garnered a total of 3,501 submissions, reflecting the remarkable enthusiasm and dedication of the participants. The competition witnessed an average of 80 daily submissions, underscoring the sustained interest and commitment throughout its duration. In total, the event attracted 17 registered teams using ICIP's Conference Management Services (CMS). The vibrant participation was further highlighted by the involvement of 121 individual participants, showcasing the diverse and competitive nature of the field. Figure 2 highlights daily submission statistics gathered from Codalab throughout the competition. Notably, the global reach of the VIP Cup 2023 was evident, with significant representation from countries such as Bangladesh, India, Sri Lanka, and China, emphasizing the competition's truly international character.


IV Finalists

The finalist teams of the 2023 VIP Cup, as well as their respective rankings are as follows:

Team Synapse (first place)

  • Affiliation: Bangladesh University of Engineering and Technology

  • Students: H.A.Z. Sameen Shahgir, Khondker Salman Sayeed, Tanjeem Azwad Zaman, Md. Asif Haider

  • Mentor: Sheikh Saifur Rahman Jony

  • Supervisor: M. Sohel Rahman

Team Neurons (second place)

  • Affiliation: Bangladesh University of Engineering and Technology

  • Students: Md. Touhidul Islam, Md. Abtahi Majeed Chowdhury, Mahmudul Hasan, Asif Quadir

  • Supervisor: Lutfa Akter

Team IITH (third place)

  • Affiliation: Indian Institute of Technology, Hyderabad

  • Students: Aaseesh Rallapalli, Lokesh Badisa, Nithish S, Utkarsh Doshi

  • Supervisor: Soumya Jana


V Highlights of Technical Approaches

The finalists for the 2023 VIP Cup competition are teams Neurons, IITH, and Synapse. In developing their approaches, all teams noted that they had to resolve the discrepancy between local and global structures in the provided data. This problem for the associated OCT imaging data is discussed extensively in [5], where the authors describe how algorithms must account for biomarkers with both fine and coarse granularity. The challenge in medical data is that both types of biomarkers can appear simultaneously, and models are more likely to identify the coarse biomarkers while showing poor performance on the fine-grained biomarkers. Below, we summarize each approach and specifically highlight, where appropriate, each team's attempts to address this granularity issue. Afterwards, we analyze how their differences manifest across performance metrics related to both the personalization and generalization tasks of this competition.

Team IITH

surprisingly showed that an Inception v3 [6] backbone with sufficient hyperparameter tuning was enough to obtain competitive performance. The participants arrived at this conclusion after a wide architecture search across a variety of convolutional and transformer-based architectures. The team notes that specialized architectural considerations in Inception v3, such as factorized convolutions, spatially separable convolutions, and efficient grid reduction, may enable this architecture to perform feature extraction at multiple scales, which is well suited to the biomarker detection task.
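A hedged sketch of this style of approach is shown below; it is illustrative only, and the team's exact preprocessing, classification head, and hyperparameters are not reproduced here.

```python
# Hedged sketch of an Inception v3-based multi-label classifier in the spirit
# of Team IITH's approach; the weights, head, and input handling are illustrative.
import torch.nn as nn
from torchvision import models

def build_inception_v3_classifier(num_biomarkers: int = 6) -> nn.Module:
    # Inception v3 expects 3-channel 299x299 inputs; grayscale OCT slices can
    # be replicated across channels during preprocessing.
    model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
    model.fc = nn.Linear(model.fc.in_features, num_biomarkers)
    # The auxiliary classifier must also match the multi-label output size.
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_biomarkers)
    return model
```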

Table II: Biomarker-specific F1-scores for all teams. B1-B6 denote the six biomarkers; the Overall column is each team's overall score for the corresponding phase.

Phase 1 Biomarker-Specific Generalization F1-Score
| Team         | B1    | B2    | B3    | B4    | B5    | B6    | Overall |
| Elemenopi    | 0.656 | 0.756 | 0.784 | 0.832 | 0.852 | 0.992 | 0.7518  |
| IITH         | 0.684 | 0.740 | 0.732 | 0.940 | 0.864 | 0.992 | 0.747   |
| MEA          | 0.380 | 0.620 | 0.592 | 0.636 | 0.156 | 0.020 | 0.197   |
| Neurons      | 0.648 | 0.760 | 0.788 | 0.960 | 0.768 | 0.984 | 0.779   |
| Optimpus     | 0.688 | 0.488 | 0.512 | 0.776 | 0.852 | 0.992 | 0.646   |
| Pixel Pulse  | 0.652 | 0.664 | 0.656 | 0.912 | 0.788 | 0.564 | 0.532   |
| Pixel Vision | 0.476 | 0.648 | 0.600 | 0.648 | 0.160 | 0.072 | 0.398   |
| Sharks       | 0.652 | 0.696 | 0.632 | 0.616 | 0.848 | 0.992 | 0.668   |
| Source Code  | 0.676 | 0.552 | 0.492 | 0.848 | 0.884 | 0.916 | 0.6133  |
| Spectrum     | 0.716 | 0.740 | 0.748 | 0.908 | 0.804 | 0.968 | 0.7504  |
| Synapse      | 0.712 | 0.776 | 0.836 | 0.956 | 0.860 | 0.992 | 0.806   |
| Tesseract    | 0.732 | 0.676 | 0.592 | 0.932 | 0.848 | 0.992 | 0.644   |
| Ultrabot AIO | 0.732 | 0.632 | 0.604 | 0.920 | 0.856 | 0.976 | 0.721   |
| UNNC_ISEAN   | 0.760 | 0.740 | 0.776 | 0.136 | 0.820 | 0.132 | 0.568   |
| UNNC_POWER   | 0.720 | 0.728 | 0.756 | 0.536 | 0.884 | 0.036 | 0.543   |

Phase 2 Biomarker-Specific Personalization F1-Score
| Team         | B1     | B2     | B3     | B4     | B5     | B6     | Overall |
| IITH         | 0.6893 | 0.7528 | 0.7578 | 0.9483 | 0.8516 | 0.9880 | 0.8215  |
| Neurons      | 0.6397 | 0.7762 | 0.7891 | 0.9657 | 0.7684 | 0.9847 | 0.8116  |
| Synapse      | 0.7173 | 0.7946 | 0.8525 | 0.9632 | 0.8456 | 0.9880 | 0.8527  |
| Elemenopi    | 0.6527 | 0.7734 | 0.8122 | 0.8170 | 0.8396 | 0.9880 | 0.7966  |
| MEA          | 0.3861 | 0.6236 | 0.5920 | 0.6360 | 0.1720 | 0.0299 | 0.3822  |
| Optimpus     | 0.6897 | 0.5087 | 0.5207 | 0.7902 | 0.8506 | 0.9880 | 0.6970  |
| Pixel Pulse  | 0.6477 | 0.6709 | 0.6954 | 0.9183 | 0.7926 | 0.5560 | 0.6984  |
| Pixel Vision | 0.4774 | 0.6466 | 0.6188 | 0.6903 | 0.1727 | 0.0689 | 0.4139  |
| Sharks       | 0.6470 | 0.6889 | 0.6282 | 0.6191 | 0.8306 | 0.9880 | 0.7056  |
| Source Code  | 0.7028 | 0.5870 | 0.4977 | 0.8627 | 0.8849 | 0.9254 | 0.7274  |
| Spectrum     | 0.7326 | 0.7518 | 0.7505 | 0.9068 | 0.8038 | 0.9676 | 0.8067  |
| Tesseract    | 0.7440 | 0.6928 | 0.6158 | 0.9378 | 0.8336 | 0.9880 | 0.7921  |
| Ultrabot AIO | 0.7421 | 0.6364 | 0.5963 | 0.9311 | 0.8425 | 0.9757 | 0.7723  |
| UNNC_ISEAN   | 0.7710 | 0.7658 | 0.7997 | 0.1379 | 0.8086 | 0.1349 | 0.5426  |
| UNNC_POWER   | 0.7178 | 0.7464 | 0.7897 | 0.5241 | 0.8739 | 0.0479 | 0.5930  |

Team Neurons
Team Neurons

used their signal and image processing knowledge to recognize that feature extraction is an important part of classification, particularly because of how visually different the biomarkers appear [7]. Their insight was that convolutional neural networks (CNNs) are good at local feature extraction with their fixed kernels, while transformers use attention to extract features from a global context; an ensemble of both therefore enables extraction of both types of features. Specifically, they used an EfficientNet [8] architecture for local feature extraction and a Max-ViT [9] transformer for global understanding. They then split the overall OLIVES dataset according to the TREX and PRIME clinical trials. The ensemble was formed by training their models on the subsets TREX + PRIME, TREX only, and PRIME only, which allows different subsets of the models to be tuned to the different types of features that each trial provides. During inference, each pair of models trained on a given subset produces a prediction, and these predictions are then combined in an ensemble fashion to produce the final decision.
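The following sketch illustrates the described ensemble structure; the timm model variants, subset names, and averaging scheme are assumptions for illustration rather than the team's exact configuration.

```python
# Hedged sketch of the described CNN + transformer ensemble: an EfficientNet
# for local features and a MaxViT for global context, with one pair per data
# subset (TREX + PRIME, TREX only, PRIME only) and sigmoid outputs averaged
# at inference. Model variants and names are illustrative assumptions.
import torch
import timm

def build_pair(num_biomarkers: int = 6):
    cnn = timm.create_model("efficientnet_b0", pretrained=True, num_classes=num_biomarkers)
    vit = timm.create_model("maxvit_tiny_tf_224", pretrained=True, num_classes=num_biomarkers)
    return cnn, vit

# One (CNN, transformer) pair per training subset; each pair would be trained
# on its corresponding subset before inference.
ensemble = {name: build_pair() for name in ["trex_plus_prime", "trex_only", "prime_only"]}

@torch.no_grad()
def ensemble_predict(oct_batch, threshold=0.5):
    probs = []
    for cnn, vit in ensemble.values():
        probs.append(torch.sigmoid(cnn(oct_batch)))
        probs.append(torch.sigmoid(vit(oct_batch)))
    # Average all six models' probabilities, then threshold per biomarker.
    return (torch.stack(probs).mean(dim=0) > threshold).int()
```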

Team Synapse

similarly used their image and signal processing knowledge to inform their observation that an ensemble of models, with one performing local feature extraction and one performing global attention, is optimal for this task due to biomarkers of varying granularity [10]. However, in contrast to team Neurons, they used a vision transformer with a convolutional block as the local feature extractor, rather than a CNN alone. In this case, Max-ViT [9] was the local extractor and EVA-02 [11] performed global attention. The team notes that the use of standard attention, as opposed to strided attention, accounts for the difference between global and local feature extraction capability. An ensemble was then created by having Max-ViT predict only fine-grained biomarkers, while EVA-02 was responsible for coarse-grained biomarkers.
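A hedged sketch of this biomarker-wise split between the two models is shown below; which biomarker indices count as fine- versus coarse-grained, and the timm model variants, are illustrative assumptions.

```python
# Hedged sketch of the biomarker-wise split: MaxViT predicts the fine-grained
# biomarkers and EVA-02 the coarse-grained ones, and the two prediction sets
# are stitched together. The index split and model variants are illustrative.
import torch
import timm

FINE_IDX = [0, 1, 2]    # assumed fine-grained biomarker positions
COARSE_IDX = [3, 4, 5]  # assumed coarse-grained biomarker positions

maxvit = timm.create_model("maxvit_tiny_tf_224", pretrained=True, num_classes=6)
eva02 = timm.create_model("eva02_base_patch14_224", pretrained=True, num_classes=6)

@torch.no_grad()
def combined_predict(oct_batch, threshold=0.5):
    pred = torch.empty(oct_batch.size(0), 6, device=oct_batch.device)
    pred[:, FINE_IDX] = torch.sigmoid(maxvit(oct_batch))[:, FINE_IDX]
    pred[:, COARSE_IDX] = torch.sigmoid(eva02(oct_batch))[:, COARSE_IDX]
    return (pred > threshold).int()
```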

A closer look at the performance of each approach in a variety of settings provides deeper insight into the personalization and generalization capabilities of each algorithm. Table II reports the performance of all teams on individual biomarkers, allowing the highest performance per biomarker to be compared across teams. All teams performed similarly well on B4, B5, and B6, but a substantial gap exists on B1, B2, and B3. Team Synapse performed better on the more difficult fine-grained biomarkers than either of the other two finalists, and this trend holds for both the personalization and generalization performance.


We also rank each team in Figure 3 based on their generalization performance on each biomarker. Teams Synapse and IITH performed consistently well across all biomarkers, while Team Neurons exhibited higher variation, performing better on some biomarkers but worse on others. We also rank teams in Figure 4 based on the number of patients for whom performance crossed a satisfactory F1-score threshold (0.8). In Figure 5, we show a scatter plot that demonstrates how correlated performance on the generalization and personalization tasks is for each team. We observe that personalization performance correlates with performing well on a higher number of patients, as shown in Figure 6. In both plots, the top 3 teams cluster in the high-performing top-right region of the figures, and the overall winner, Team Synapse, shows a noticeable separation even from the other finalists.
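A small sketch of the analysis behind these figures, assuming per-patient F1-scores and per-team overall scores are available as dictionaries:

```python
# Hedged sketch of the analysis behind Figures 4-6: count patients whose
# per-patient macro F1 clears the 0.8 threshold, and correlate each team's
# generalization score with its personalization score. Input dictionaries
# are assumptions about how the scores are stored.
import numpy as np

def patients_above_threshold(per_patient_f1, threshold=0.8):
    """per_patient_f1: dict mapping patient id -> macro F1 for one team."""
    return sum(score >= threshold for score in per_patient_f1.values())

def generalization_personalization_corr(phase1_scores, phase2_scores):
    """Both args: dict mapping team name -> overall score; returns Pearson r."""
    teams = sorted(set(phase1_scores) & set(phase2_scores))
    g = np.array([phase1_scores[t] for t in teams])
    p = np.array([phase2_scores[t] for t in teams])
    return float(np.corrcoef(g, p)[0, 1])
```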

VI Insights and Takeaways

In this section, we discuss the insights used to create the VIP Cup 2023 challenge and the issues faced throughout the competition. The topic of integrating artificial intelligence (AI) within medicine has gained much attention in the research community in recent years and presents an excellent avenue for introducing students to real-world applications of artificial intelligence. Ophthalmology, in particular, has been described as the first clinical application domain with the potential for full automation [12]. This is primarily because OCT and fundus photography exhibit more favorable noise characteristics than many other medical imaging modalities, making features easier to derive and the data more amenable to AI algorithms that rely on standard features to distinguish different classes. This is demonstrated in Figure 7, where the structures present in OCT are clearly visible, as opposed to the images taken from a dermatology dataset (DermaMNIST) and a chest X-ray dataset (PadChest) [13], [14].


Another advantage of using OCT data is that it is associated with a clinical trial structure, rather than appearing as isolated instances [15]. The clinical trial structure enables the existence of many different modalities, such as textual documents, clinical values, and other imaging modalities [16]. Therefore, students gain exposure to the existence and interaction of these different axes within the medical domain. Furthermore, the clinical trial structure acts as a regularizer on the scope of the data: the size of the data is limited by the resources of the clinical trial, which prevents the data source from becoming too large for students to work with effectively. This is important within the context of the VIP Cup because it keeps the competition accessible to students from different regions and backgrounds who may have less access to computational resources than others. Additionally, the clinical trial structure ensures that data is composed from wider demographic sources, which in turn ensures that the solutions devised by the students are indeed generalizable.

We now discuss the competition from an organizational standpoint, as observed by the organizing committee (Figure 8). When running the VIP Cup competition, we initially utilized Codalab for our project needs, which offered automatic migration of Phase 1 submissions for hidden Phase 2 evaluations. However, we encountered performance and accessibility issues: due to the high volume of traffic and the migration of submissions on the Codalab platform, the server crashed. To combat this problem, we established a mirror site, where submissions were scored directly without any need for migration. This change drastically improved performance and reliability. Submissions were seamless with faster scoring, proving that the establishment of the mirror site was an efficient solution. This observation was corroborated by participating teams, as one team noted that a more mature platform might be a better solution for future VIP Cup iterations. Despite this issue, the competition proved to be a valuable experience for teams. Teams were able to directly relate their knowledge of machine learning and signal processing to a real-life biomedical application. Teams also felt they were able to understand the challenges of applying machine learning to such applications, where creative approaches were taken to account for the varying granularity levels amongst biomarkers. Teams also felt the VIP Cup experience improved team members' written and oral communication skills, as well as leadership abilities. In particular, Team Synapse notes the following about the VIP Cup challenge: “All in all, VIP Cup 2023 was a monumental experience for our team, and thanks once again.”


VII Conclusion

One of the major advantages of the VIP Cup is encouraging undergraduate students to tackle real-world challenges under the support of a team of organizers and mentors within their own institution as well as between a variety of institutions. In this iteration of the VIP Cup, students were tasked with developing a machine learning algorithm to detect biomarkers of disease within an OCT clinical trial setting. Specifically, we challenged teams to find a balance between generalization and personalization for biomarker detection in ophthalmology data, encouraging teams to learn more about the complexities of medical data and how deep learning needs to adapt to such obstacles. Below, we highlight a summary of the takeaways from this competition:

  • Noticing which architectures perform better at local versus global feature detection and ensembling these models presents an effective approach to the generalization and personalization challenge present in medical data.

  • Effective approaches for these tasks can also be achieved through simpler methods, such as adopting a strong model backbone and performing sufficient hyperparameter tuning alongside targeted changes within the model architecture.

  • Some biomarkers remain a challenge to detect overall, such as B1, where none of the top three finalists achieved the highest score for detection of this biomarker.

  • A reliable platform for competition hosting must be chosen, including backup plans to account for the high volume of traffic.

Overall, this competition gave undergraduates an opportunity to learn about how artificial intelligence can be a powerful tool for the medical field, as well as the unique challenges one faces when applying machine learning to biomedical data.

VIII Acknowledgments

We would like to thank the judges, Lucas Thomaz, Kenneth Kin Man Lam, Mohit Prabhushankar and Ghassan AlRegib, for their time and feedback during this process. We would also like to extend thanks to Jaqueline Rash, who assisted with the competition website and helped arrange the logistics of the competition. This work was partially funded by FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the projects UIDB/50008/2020 and LA/P/0109/2020. This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

IX Authors

Ghassan AlRegib (alregib@gatech.edu) obtained his Ph.D. in Electrical Engineering from the Georgia Institute of Technology (Georgia Tech). He is currently the John and Marilu McCarty Chair Professor in the School of Electrical and Computer Engineering at Georgia Tech. In the Omni Lab for Intelligent Visual Engineering and Science (OLIVES), he and his group work on robust and interpretable machine learning algorithms, uncertainty and trust, and human-in-the-loop algorithms. He has been issued several U.S. patents and invention disclosures. He is a Fellow of the IEEE. Prof. AlRegib is active in the IEEE. He served on the editorial board of several transactions and served as the TPC Chair for ICIP 2020, ICIP 2024, and GlobalSIP 2014. He was the area editor for the IEEE Signal Processing Magazine. In 2008, he received the ECE Outstanding Junior Faculty Member Award. In 2017, he received the 2017 Denning Faculty Award for Global Engagement. He and his students received the Best Paper Award in ICIP 2019.

Mohit Prabhushankar (mohit.p@gatech.edu) received his Ph.D. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, 30332, USA, in 2021. He is currently a Postdoctoral Researcher and Teaching Fellow in the School of Electrical and Computer Engineering at the Georgia Institute of Technology in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES) lab. He is working in the fields of image processing, machine learning, explainable and robust AI, active learning, and healthcare. He is the recipient of the Best Paper award at ICIP 2019 and Top Viewed Special Session Paper Award at ICIP 2020. He is the winner of the Roger P Webb ECE Graduate Research Excellence award in 2022. He is an IEEE Member.

Kiran Kokilepersaud (kpk6@gatech.edu) obtained a B.S. degree in Electrical Engineering from the University of Maryland. He is currently a Ph.D. student in electrical and computer engineering at the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, 30332, USA. He is now a Graduate Research Assistant in the School of Electrical and Computer Engineering at the Georgia Institute of Technology in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES) lab. He is a recipient of the Georgia Tech President’s Fellowship for excellence amongst incoming Ph.D. students. His research interests include digital signal and image processing, machine learning, and its associated applications within the medical field. He is an IEEE Student Member.

Prithwijit Chowdhury (pchowdhury6@gatech.edu) received his B.Tech. degree from KIIT University, India in 2020. He joined Georgia Institute of Technology as an MS student in the department of Electrical and Computer Engineering in 2021. He is currently pursuing his Ph.D. degree as a researcher in The Center for Energy and Geo Processing (CeGP) as a member of the Omni Lab for Intelligent Visual Engineering and Science (OLIVES). His interests lie in the areas of digital signal and image processing and machine learning with applications to geophysics. He is an IEEE Student Member.

Zoe Fowler (zfowler3@gatech.edu) received her B.S. degree in Electrical Engineering from Mississippi State University. She is now a PhD student in electrical and computer engineering at Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, 30332, USA. She is currently a researcher in the Omni Lab for Intelligent Visual Engineering and Sciences (OLIVES). She is a recipient of the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) that recognizes graduate students based on their research and academic achievements, as well as the Georgia Tech President’s Fellowship. Her interests lie in the areas of digital signal and image processing and machine learning with applications to healthcare. She is an IEEE Student Member.

Stephanie Trejo Corona is a clinical research fellow at Retina Consultants of Texas. She graduated from Rice University with a B.S. in Biochemistry and Cell Biology and a B.A. in Kinesiology in 2021. She has previously worked in the fields of synthetic and plant biology, with a passion for translational research. She was awarded the Rice University Biosciences Department Distinction in Research and Creative Work as well as multiple recognitions for her research presentations during her undergraduate career. She is currently working in the field of ophthalmology and hopes to improve healthcare access and clinical outcomes through prospective clinical trials, retrospective cohort studies, and applications of artificial intelligence.

Lucas Thomaz (lucas.thomaz@co.it.pt) obtained his Ph.D. in Electrical Engineering from Universidade Federal do Rio de Janeiro. He is currently a researcher at Instituto de Telecomunicações and an associate professor in the School of Technology and Management of Polytechnic of Leiria, Leiria 2411-901, Portugal. He serves as a member of the Education Board. He is also a member of the Student Services Committee of the IEEE Signal Processing Society, supporting the Video and Image Processing Cup and the IEEE Signal Processing Cup, and the chair of the Engagement and Career Training Subcommittee. His research interests include image and video processing. He is an associate editor for IEEE Open Journal of Signal Processing. He is a Senior Member of IEEE.

Angshul Majumdar (angshul@iiitd.ac.in) is Professor at TCG CREST, Kolkata. Prior to that he was a Professor at Indraprastha Institute of Information Technology, New Delhi from 2012 to 2023. He completed his PhD from the University of British Columbia in Electrical and Computer Engineering. Angshul is currently the director of the student services committee with IEEE Signal Processing Society. He has previously served the society as chair of chapter’s committee (2016-18), chair of education committee (2019) and member-at-large of the education board (2020). He is an associate editor for IEEE Open Journal of Signal Processing and Elsevier Neurocomputing. In the past, he was an associate editor for IEEE Transactions on Circuits and Systems for Video Technology. His research interests include problems in biomedical signal processing and imaging. He is a Senior Member of IEEE.

References

  • [1] M. Prabhushankar, K. Kokilepersaud, Y.-y. Logan, S. Trejo Corona, G. AlRegib, and C. Wykoff, “OLIVES dataset: Ophthalmic labels for investigating visual eye semantics,” Advances in Neural Information Processing Systems, vol. 35, pp. 9201–9216, 2022.
  • [2] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan et al., “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
  • [3] K. Kokilepersaud, M. Prabhushankar, and G. AlRegib, “Clinical contrastive learning for biomarker detection,” arXiv preprint arXiv:2211.05092, 2022.
  • [4] K. Kokilepersaud, M. Prabhushankar, G. AlRegib, S. T. Corona, and C. Wykoff, “Gradient-based severity labeling for biomarker classification in OCT,” in 2022 IEEE International Conference on Image Processing (ICIP). IEEE, 2022, pp. 3416–3420.
  • [5] K. Kokilepersaud, S. T. Corona, M. Prabhushankar, G. AlRegib, and C. Wykoff, “Clinically labeled contrastive learning for OCT biomarker classification,” IEEE Journal of Biomedical and Health Informatics, 2023.
  • [6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
  • [7] M. T. Islam, M. A. M. Chowdhury, M. Hasan, A. Quadir, and L. Aktar, “Ophthalmic biomarker detection with parallel prediction of transformer and convolutional architecture.”
  • [8] M. Tan and Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning. PMLR, 2019, pp. 6105–6114.
  • [9] Z. Tu, H. Talebi, H. Zhang, F. Yang, P. Milanfar, A. Bovik, and Y. Li, “MaxViT: Multi-axis vision transformer,” in European Conference on Computer Vision. Springer, 2022, pp. 459–479.
  • [10] H. S. Shahgir, K. S. Sayeed, T. A. Zaman, M. A. Haider, S. S. R. Jony, and M. S. Rahman, “Ophthalmic biomarker detection using ensembled vision transformers IEEE SPS VIP Cup 2023: Team Synapse.”
  • [11] Y. Fang, Q. Sun, X. Wang, T. Huang, X. Wang, and Y. Cao, “EVA-02: A visual representation for neon genesis,” arXiv preprint arXiv:2303.11331, 2023.
  • [12] E. Beede, E. Baylor, F. Hersch, A. Iurchenko, L. Wilcox, P. Ruamviboonsuk, and L. M. Vardoulakis, “A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, pp. 1–12.
  • [13] J. Yang, R. Shi, D. Wei, Z. Liu, L. Zhao, B. Ke, H. Pfister, and B. Ni, “MedMNIST v2 - a large-scale lightweight benchmark for 2D and 3D biomedical image classification,” Scientific Data, vol. 10, no. 1, p. 41, 2023.
  • [14] A. Bustos, A. Pertusa, J.-M. Salinas, and M. De La Iglesia-Vaya, “PadChest: A large chest x-ray image dataset with multi-label annotated reports,” Medical Image Analysis, vol. 66, p. 101797, 2020.
  • [15] Z. Fowler, K. P. Kokilepersaud, M. Prabhushankar, and G. AlRegib, “Clinical trial active learning,” in Proceedings of the 14th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, 2023, pp. 1–10.
  • [16] Y.-y. Logan, M. Prabhushankar, and G. AlRegib, “DECAL: Deployable clinical active learning,” arXiv preprint arXiv:2206.10120, 2022.