
armodeniz zhang

armodeniz

AI & ML interests

None yet

Organizations

None yet

Collections 1

Preference Alignment in LLM
Methods that align LLMs with human preferences.
  • Contrastive Preference Learning: Learning from Human Feedback without RL

    Paper • 2310.13639 • Published Oct 20, 2023 • 25
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback

    Paper • 2309.00267 • Published Sep 1, 2023 • 51
  • A General Theoretical Paradigm to Understand Learning from Human Preferences

    Paper • 2310.12036 • Published Oct 18, 2023 • 16
  • Deep Reinforcement Learning from Hierarchical Weak Preference Feedback

    Paper • 2309.02632 • Published Sep 6, 2023 • 1
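The papers above all build on the same primitive: a pairwise preference loss over a chosen and a rejected response. As a minimal illustrative sketch (not taken from any of the listed papers), the Bradley-Terry objective commonly used in RLHF-style reward modeling scores the chosen response against the rejected one:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood for a single preference pair.

    The model's probability that the chosen response beats the rejected one
    is sigmoid(r_chosen - r_rejected); the loss is its negative log.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the score margin favoring the chosen response grows,
# pushing a learned reward (or policy log-ratio, as in DPO-style methods)
# to rank preferred responses higher.
```

With a zero margin the loss equals log 2 (a coin-flip prediction), and it decreases monotonically as the chosen response's score pulls ahead.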

models 0

None public yet

datasets 0

None public yet