Code for reproducing the paper "Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction," to appear at the 7th Workshop on Noisy User-generated Text (W-NUT), organized at EMNLP 2021.
This repository presents a gemstone classification project employing transfer learning with MobileNetV2, using a dataset of over 3,200 images spanning 87 classes. TensorFlow and Keras are used for data preprocessing, augmentation, and model training, with fine-tuning applied to adapt the pre-trained features to the gemstone domain.
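The transfer-learning setup described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code; the input size, head architecture, and hyperparameters are assumptions (the repo presumably loads `weights="imagenet"`, which is replaced with `weights=None` here only to keep the sketch self-contained).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 87  # number of gemstone classes in the dataset described above

# Load MobileNetV2 without its classification head. weights=None avoids a
# network download in this sketch; real transfer learning would use
# weights="imagenet" to start from pre-trained features.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone for the initial training phase

# Attach a small classification head on top of the frozen backbone.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For fine-tuning, one would typically later set `base.trainable = True` and re-compile with a much lower learning rate so the pre-trained features are adjusted gently.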
Explore the rich flavors of Indian desserts with TunedLlavaDelights. Using LLaVA fine-tuning, our project unveils detailed nutritional profiles, taste notes, and optimal consumption times for beloved sweets. Dive into a fusion of AI innovation and culinary tradition.
Our GitHub repository advances the development of GPT for care assessment (Pflegebegutachtung), aiming to improve accuracy and efficiency in nursing care. It provides specialized datasets, benchmarking tools, and validation code for innovators in AI and care. Get involved to drive care assessment forward through technology.
SEIKO is a novel reinforcement learning method for efficiently fine-tuning diffusion models in an online setting. It outperforms all baselines (PPO, classifier-based guidance, direct reward backpropagation) for fine-tuning Stable Diffusion.