Hi, thanks for creating and maintaining PromptPapers. It is a very useful collection for people who want to study prompt-based tuning for pre-trained language models.
I wanted to ask whether a framework-style paper that uses prompt structures for LLM self-healing would be considered in scope for this list.
Project name: WFGY
Main paper DOI: 10.6084/m9.figshare.30338884
Main paper PDF: https://github.com/onestardao/WFGY/blob/main/I_am_not_lizardman/WFGY_All_Principles_Return_to_One_v1.0_PSBigBig_Public.pdf
GitHub repo: https://github.com/onestardao/WFGY
What WFGY 1.0 is
A framework paper that models LLMs as self-healing systems and designs multi-step prompt structures to detect and repair failures.
Focuses on prompt-level control and module composition instead of changing model weights.
Provides concrete prompt templates and workflows that can be used directly with existing models in practice.
Published with a public DOI and an accompanying open source repo.
Why it might fit this project
Many readers of PromptPapers are interested not only in tuning methods but also in practical prompt frameworks that improve reliability.
WFGY 1.0 can be seen as a prompt-level self-healing framework, sitting alongside more classical prompt tuning and instruction optimization work.
It may be useful as a bridge between academic prompt tuning papers and real-world debugging workflows.
Possible way to integrate
If you have, or plan to add, a small section for framework- or application-oriented papers, WFGY 1.0 could be listed there as a case study on prompt-based self-healing.
Alternatively, it could be placed in a miscellaneous or related-resources section for readers who want to explore prompt frameworks beyond the core tuning literature.
If you prefer to keep this list strictly for traditional peer-reviewed tuning papers and do not want to include framework-style work like this, that is completely understandable.
Thank you again for curating and maintaining this resource.