19 April, 2025

Possible National Security Crisis Averted: CISA’s Reversal Extends Support for CVE Database




The nonprofit organization MITRE, which maintains the Common Vulnerabilities and Exposures (CVE) database, said on April 15 that US government funding for its operations would expire without renewal; however, in a last-minute reversal announced the morning of April 16, CISA said it had extended support for the database. At the same time, CVE Board members have founded the CVE Foundation, a nonprofit not affiliated with the US federal government, to maintain the CVE program.

The CVE program, which has been in place since 1999, is an essential way to report and track vulnerabilities. Many other cybersecurity resources, such as Microsoft’s Patch Tuesday update and report, refer to CVE numbers to identify flaws and fixes. Organizations called CVE Numbering Authorities are associated with MITRE and authorized to assign CVE numbers.

“CVE underpins a huge chunk of vulnerability management, incident response, and critical infrastructure protection efforts,” wrote Casey Ellis, founder of crowdsourced cybersecurity hub Bugcrowd, in an email to TechRepublic. “A sudden interruption in services has the very real potential to bubble up into a national security problem in short order.”
MITRE’s funding was expected to run out without renewal

A letter sent to CVE board members began circulating on social media on Tuesday.

“Current contracting pathway for MITRE to develop, operate, and modernize CVE and several other related programs, such as CWE, will expire,” said the letter from Yosry Barsoum, vice president and director of the Center for Securing the Homeland, a division of MITRE.

CWE is Common Weakness Enumeration, the list of hardware and software weaknesses.

“The government continues to make considerable efforts to continue MITRE’s role in support of the program,” Barsoum wrote.

MITRE is traditionally funded by the Department of Homeland Security.


MITRE did not respond to TechRepublic’s questions about the cause of the expiration or what cybersecurity professionals can expect next.

MITRE has not specified whether the funding cut is related to the widespread cuts made by the Department of Government Efficiency (DOGE).
CVE Foundation has been laying the groundwork for a new system for the past year

Prior to CISA’s announcement, an independent foundation said it was prepared to step in to continue the CVE program. The CVE Foundation is a nonprofit dedicated to maintaining the CVE submission program and database.

“While we had hoped this day would not come, we have been preparing for this possibility,” wrote an anonymous CVE Foundation representative in a press release on Wednesday. “In response, a coalition of longtime, active CVE Board members have spent the past year developing a strategy to transition CVE to a dedicated, non-profit foundation.”

The CVE Foundation plans to detail its structure, timeline, and opportunities for involvement in the future. With CISA extending funding, the foundation may not be needed yet – although it may be reassuring to know its services and backups are available.


#ResearchDataExcellence #DataAnalysisAwards #InternationalDataAwards #ResearchDataAwards #DataExcellence #ResearchData #DataAnalysis #DataAwards #GlobalDataExcellence #DataInnovationAwards #DataResearch #ExcellenceInData #DataAwardWinners#DataAnalysisExcellence #ResearchDataInsights #GlobalResearchAwards #DataExcellenceAwards #ExcellenceInResearchData #ResearchDataLeadership #DataResearchExcellence #AwardWinningData #InternationalResearchAwards #DataAnalysisInnovation #ResearchDataAchievement #ExcellenceInDataAnalysis #GlobalDataInsights #ResearchDataSuccess #DataAwards2024

Website: International Research Data Analysis Excellence Awards

Visit Our Website : researchdataanalysis.com
Nomination Link : researchdataanalysis.com/award-nomination
Registration Link : researchdataanalysis.com/award-registration
member link : researchdataanalysis.com/conference-abstract-submission
Awards-Winners : researchdataanalysis.com/awards-winners
Contact us : contact@researchdataanalysis.com

Get Connected Here:
==================
Facebook : www.facebook.com/profile.php?id=61550609841317
Twitter : twitter.com/Dataanalys57236
Pinterest : in.pinterest.com/dataanalysisconference
Blog : dataanalysisconference.blogspot.com
Instagram : www.instagram.com/eleen_marissa 


17 April, 2025

LLM + RAG: Creating an AI-Powered File Reader Assistant






Introduction


AI is everywhere.

It is hard not to interact at least once a day with a Large Language Model (LLM). The chatbots are here to stay. They’re in your apps, they help you write better, they compose emails, they read emails…well, they do a lot.

And I don’t think that is bad. In fact, my opinion is quite the opposite – at least so far. I defend and advocate for the use of AI in our daily lives because, let’s agree, it makes everything much easier.

I don’t have to spend time rereading a document to find punctuation problems or typos. AI does that for me. I don’t waste time writing that follow-up email every single Monday. AI does that for me. I don’t need to read a huge, boring contract when I have an AI to summarize the main takeaways and action points for me!

These are only some of AI’s great uses. If you’d like to know more use cases of LLMs to make our lives easier, I wrote a whole book about them.

Now, thinking as a data scientist and looking at the technical side, not everything is that bright and shiny.

LLMs are great for several general use cases that apply to anyone or any company – for example, coding, summarizing, or answering questions about general content created before the training cutoff date. However, when it comes to specific business applications, single-purpose tools, or anything new that didn’t make the cutoff date, out-of-the-box models won’t be that useful – meaning, they will not know the answer. Thus, they will need adjustments.

Training an LLM can take months and millions of dollars. What is even worse is that if we don’t adjust and tune the model to our purpose, we will get unsatisfactory results or hallucinations (when the model’s response doesn’t make sense given our query).

So what is the solution, then? Spending a lot of money retraining the model to include our data?

Not really. That’s when Retrieval-Augmented Generation (RAG) becomes useful.

RAG is a framework that combines getting information from an external knowledge base with large language models (LLMs). It helps AI models produce more accurate and relevant responses.

Let’s learn more about RAG next.
What is RAG?

Let me tell you a story to illustrate the concept.

I love movies. For some time in the past, I knew which movies were competing for the best movie category at the Oscars or the best actors and actresses. And I would certainly know which ones got the statue for that year. But now I am all rusty on that subject. If you asked me who was competing, I would not know. And even if I tried to answer you, I would give you a weak response.

So, to provide you with a quality response, I will do what everybody else does: search for the information online, obtain it, and then give it to you. That is exactly the idea behind RAG: I retrieved data from an external source to compose an answer.

When we enhance the LLM with a content store where it can go and retrieve data to augment (increase) its knowledge base, that is the RAG framework in action.

RAG is like creating a content store where the model can enhance its knowledge and respond more accurately.

User prompt about Content C. LLM retrieves external content to aggregate to the answer. Image by the author.

Summarizing, the RAG framework:

Uses search algorithms to query external data sources, such as databases, knowledge bases, and web pages.
Pre-processes the retrieved information.
Incorporates the pre-processed information into the LLM.
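The loop just summarized can be sketched in a few lines of Python. Here, simple word overlap stands in for a real search algorithm, and the helper names and toy documents are illustrative only, not from any specific library:

```python
# Minimal sketch of the RAG loop: retrieve relevant documents, then build an
# augmented prompt for the LLM. Word overlap stands in for a real search
# algorithm; helper names and toy documents are illustrative only.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query and keep the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Incorporate the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The 2020 Oscar for Best Picture went to Parasite.",
    "RAG combines retrieval with generation.",
    "Streamlit builds data apps in Python.",
]
prompt = build_prompt("Which movie won the Oscar for Best Picture?", docs)
print(prompt)
```

In a real application, `retrieve` would query a vector database and the resulting prompt would be sent to an LLM API rather than printed.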
Why use RAG?

Now that we know what the RAG framework is, let’s understand why we should use it.

Here are some of the benefits:

Enhances factual accuracy by referencing real data.
RAG can help LLMs process and consolidate knowledge to create more relevant answers.
RAG can help LLMs access additional knowledge bases, such as internal organizational data.
RAG can help LLMs create more accurate domain-specific content.
RAG can help reduce knowledge gaps and AI hallucinations.

As previously explained, I like to say that with the RAG framework, we are giving the LLM an internal search engine over the content we want to add to its knowledge base.

Well, all of that is very interesting – but let’s see an application of RAG. We will learn how to create an AI-powered PDF Reader Assistant.
Project

This is an application that allows users to upload a PDF document and ask questions about its content using AI-powered natural language processing (NLP) tools. The app uses Streamlit as the front end, with LangChain, OpenAI’s GPT-4 model, and FAISS (Facebook AI Similarity Search) handling document retrieval and question answering in the back end.

Let’s break down the steps for better understanding:

1. Load a PDF file and split it into chunks of text. This makes the data optimized for retrieval.
2. Present the chunks to an embedding tool. Embeddings are numerical vector representations of data used to capture relationships, similarities, and meanings in a way that machines can understand. They are widely used in Natural Language Processing (NLP), recommender systems, and search engines.
3. Put those chunks of text and their embeddings in the same database for retrieval.
4. Finally, make the database available to the LLM.
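The embed-and-retrieve steps above can be sketched with a toy in-memory vector store. The `embed` function below is a hypothetical stand-in for a real embedding model, and the list of (chunk, vector) pairs plays the role FAISS plays in the actual app:

```python
import math

# Toy vector store: chunks and their embeddings live side by side, and
# retrieval is nearest-neighbour search by cosine similarity.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def embed(text: str) -> list[float]:
    """Hypothetical embedding: counts of a few hand-picked words. A real app
    would call an embedding model instead."""
    vocab = ["refund", "invoice", "shipping", "password"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

store = [(chunk, embed(chunk)) for chunk in [
    "Refund requests are processed within 14 days.",
    "Reset your password from the login page.",
]]

query_vec = embed("how do i get a refund")
best_chunk, _ = max(store, key=lambda item: cosine(query_vec, item[1]))
print(best_chunk)
```

Real embeddings are dense vectors with hundreds of dimensions, but the retrieval logic is the same: the chunk whose vector is closest to the query's vector wins.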
Data preparation

Preparing a content store for the LLM will take some steps, as we just saw. So, let’s start by creating a function that can load a file and split it into text chunks for efficient retrieval.
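As a minimal sketch of that load-and-split step: LangChain provides text splitters for this, but the pure-Python version below illustrates the core idea of fixed-size windows with overlap, so that no passage loses all of its surrounding context. The function name and parameters are mine, not from a library:

```python
# Split a long text into overlapping fixed-size chunks so each chunk keeps
# some context from its neighbour (a stand-in for a LangChain text splitter).

def split_into_chunks(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each chunk starts `step` chars after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "word " * 100  # stand-in for the extracted text of an uploaded PDF
chunks = split_into_chunks(text.strip(), chunk_size=100, overlap=20)
print(len(chunks), len(chunks[0]))
```

Chunk size is a tuning knob: smaller chunks retrieve more precisely, while larger chunks carry more context into the prompt.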


16 April, 2025

Research reveals new exotic quantum phenomena in atom-nanophotonics interfaces





Research on efficient light-matter interfaces at the nanoscale has recently sparked intense interest, mostly due to their plethora of potential applications, including quantum computing and quantum sensing at the single-photon level.

Enhancing light-matter interactions typically involves coupling an atom to a macroscopic system, such as a large optical cavity, or connecting a dense cloud of atoms with light in free space. The next step is to couple atoms to nanophotonic structures. This integration can make light-matter interactions even stronger, resulting in more robust systems.

“A question that cuts to the heart of it all is whether use of nanophotonic interfaces can reveal quantum phenomena never witnessed before and not just merely make old things work better than new,” notes Darrick Chang, European Research Council grantee and principal investigator of the ERC-funded FoQAL project. “From a theoretical standpoint, the question is how to model these new systems that look quite different from their larger counterparts,” adds Chang.
New model capturing quantum dynamics

Providing a detailed description of the quantum dynamics of atoms and light on the nanoscale is extremely challenging, mainly due to the large number of atoms involved and the infinite number of light modes defining the ways light waves travel through space.

The project team developed a novel and universal formalism that establishes the electronic states (‘spins’) of atoms as the primary degrees of freedom – independent values that are free to vary. In this so-called spin model, the atoms interact with each other via photon exchange. “If we solve this model, then we can derive all the quantum properties of the photons generated based on the properties of the atoms themselves. This exact formulation eliminates the need to track the infinite number of optical modes,” explains Chang.
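Schematically, photon-mediated spin-spin interactions of this kind are often written as an effective Hamiltonian of the following generic form (the notation here is a standard illustration, not taken from the project's publications):

```latex
% Atoms i and j, treated as two-level "spins", exchange photons through the
% nanostructure; J_ij sets the coherent coupling and Gamma_ij the collective
% dissipation, both fixed by the structure's electromagnetic response.
H_{\mathrm{eff}} \;=\; \hbar \sum_{i,j} \Big( J_{ij} \;-\; \tfrac{i}{2}\,\Gamma_{ij} \Big)\, \hat{\sigma}^{(i)}_{eg}\, \hat{\sigma}^{(j)}_{ge}
```

Solving for the spin dynamics under such a Hamiltonian then yields the photon properties, with no need to track the optical modes directly.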
Interference of light waves should not be neglected

Using the spin model, researchers showed that nanophotonic crystal waveguides are novel platforms where atoms and photons can interact with each other even when they are separated by relatively large distances. This type of long-range interaction, which is quite rare in most physical settings, enables observation of exotic phenomena such as quantum crystals formed by atoms held together by entanglement.

The model also helped the FoQAL team gain new insight into conventional atomic gases in free space. For example, they predicted a new bound for the performance of a quantum memory for light, which is exponentially better than a bound that was previously thought to be fundamental. This dramatic improvement resulted from exploiting wave interference in the emission of light from atoms, which is maximised when the atoms are trapped close together.

Interestingly, interference is entirely ignored in traditional light-matter interfaces, either because of the difficulty of treating it in the equations or because it is negligible. “Results suggest interference is an essential element that can enhance storage capability and efficiency of light-matter interfaces. It seems compelling then to examine whether interference can be used to boost other quantum applications, and whether it can lead to additional phenomena that challenge our textbook wisdom on atom-light interactions,” concludes Chang.


15 April, 2025

SAS Training: Everything You Need to Know to Start




Mastering SAS (Statistical Analysis System) can be a game-changer for professionals in data analytics, business intelligence, and machine learning. Whether you are an aspiring data analyst, data scientist, or SAS programmer, getting the right SAS training can open doors to exciting career opportunities. The demand for professionals skilled in SAS remains robust, particularly in sectors where regulatory compliance and data integrity are paramount, such as finance and healthcare.

With all of that in mind, let’s navigate SAS training, explore certification options, and identify the best courses to advance your skills. Once you start learning SAS, you can get certified and leverage your new skills for career growth, positioning yourself as a valuable asset in the competitive data analytics landscape.

What is SAS Training?

SAS is a powerful software suite used for data analysis, predictive modeling, and business intelligence. Many industries, including finance, healthcare, government, and retail, use SAS to interpret large datasets and make data-driven decisions.

Unlike some open-source tools, SAS provides a comprehensive, integrated environment that handles data from ingestion to reporting, ensuring consistency and reliability. Its robust statistical capabilities make it a preferred choice for complex analyses and regulatory reporting.
Key Benefits of SAS Training:

Industry relevance: SAS is widely used in enterprise-level data analytics, particularly in highly regulated industries where data governance and audit trails are crucial.
Higher job prospects: SAS-certified professionals earn competitive salaries and are highly sought after by employers who value data security and reliability.
Versatility: Used in diverse fields like finance (risk management, fraud detection), healthcare (clinical trials, patient data analysis), and IT (data warehousing, performance monitoring).
Powerful analytics: SAS provides advanced data management, statistical modeling, and visualization tools, including capabilities for time-series analysis, econometric modeling, and survival analysis.
Data security and compliance: SAS is known for its data security features and its compliance with regulations like HIPAA and GDPR.
Scalability: SAS can handle very large datasets and scale to meet the needs of large enterprises.

Types of SAS Training Available

SAS training comes in different formats, catering to beginners, professionals, and advanced learners. Here are the primary learning paths:

Online SAS Courses
Self-paced learning: Platforms like Coursera and Udemy offer SAS courses for beginners and advanced users. These courses often include video lectures, quizzes, and practical exercises, allowing learners to progress at their own pace.
Instructor-led courses: Many universities and SAS-certified institutions provide structured programs, often including live sessions, Q&A, and personalized feedback. These courses provide a more interactive learning experience.

SAS Certification Programs
Offered by the SAS Institute and online learning platforms, these programs are designed to validate specific skills and knowledge areas, providing industry recognition. They focus on hands-on training with industry-relevant projects, giving learners practical experience with real-world scenarios.

Corporate Training
Large companies provide SAS training for employees to enhance data analytics capabilities, often tailored to specific business needs and workflows. These programs may include customized modules and internal projects.

University-Based SAS Programs
Many universities offer SAS-specific coursework as part of data science and statistics degrees, providing a strong foundation in statistical theory and practical application. These programs often include research projects and internships. Some universities offer graduate certificates focused on SAS, and some partner directly with SAS to provide training.

SAS Certification: Why It Matters

Getting SAS certified validates your expertise and enhances your credibility in the job market, demonstrating your proficiency to potential employers.
Top SAS Certifications:

SAS Certified Specialist: Base Programming: Ideal for beginners learning SAS fundamentals, including data manipulation, reporting, and basic statistical procedures.
SAS Certified Advanced Programmer: Focuses on advanced data manipulation and automation, including macro programming, SQL integration, and performance optimization.
SAS Certified Data Scientist: Covers machine learning, AI, and deep learning techniques, including predictive modeling, data mining, and model evaluation.
SAS Certified Clinical Programmer: Designed for professionals in pharmaceutical and healthcare industries, focusing on data management and analysis for clinical trials.
SAS Certified Visual Business Analyst: Focuses on using SAS Visual Analytics for data visualization and interactive reporting.
SAS Certified AI and Machine Learning Professional: Covers the skills to build, deploy and manage AI and Machine Learning models.
Why Get SAS Certified?

There are some key reasons to get SAS certified:

Higher salary potential: SAS-certified professionals earn 10–15% more than non-certified peers, reflecting the value of specialized skills.
Competitive job market advantage: Employers prefer certified candidates, as certification demonstrates a commitment to professional development and mastery of SAS.
Industry recognition: SAS is a leading tool in data analytics, AI, and ML, and certification signifies a high level of expertise.
Enhanced career progression: Certification can lead to promotions and more advanced roles.
Demonstrated competency: Certification verifies that the individual has met the standards set by SAS.

How to Learn SAS: Step-by-Step Guide

Here’s how to get started with SAS training:

Understand the Basics
Learn SAS syntax, functions, and data manipulation, including DATA step programming and PROC procedures.
Explore data visualization and statistical techniques, such as descriptive statistics, hypothesis testing, and regression analysis.
Familiarize yourself with SAS libraries and datasets.

Enroll in a SAS Training Course
Free courses: SAS Institute offers free tutorials, including SAS OnDemand for Academics, which provides access to SAS software and learning resources.
Paid courses: Platforms like Coursera and Udemy provide certification-oriented courses, offering structured learning paths and hands-on exercises.

Practice with Real Datasets
Use SAS University Edition (free software for learners) or SAS OnDemand for Academics.
Work on data visualization and statistical modeling projects, using publicly available datasets or creating your own.
Participate in online forums and communities to get feedback and learn from others.

Take a SAS Certification Exam
Choose an exam that aligns with your career goals and skill level.
Prepare using practice tests and official SAS training materials, focusing on areas where you need improvement.
Practice time management during study, as the certification exams have time limits.

Apply SAS Skills in Real-World Projects
Participate in data analytics projects, either at work or through freelance opportunities.
Build a portfolio showcasing SAS expertise, including code samples, data visualizations, and project reports.
Contribute to open-source projects or create your own SAS applications.

Top SAS Training Programs

Here are the best platforms for learning SAS:

SAS Institute: Official SAS courses and certification prep, providing access to expert instructors and comprehensive learning materials.
Pros: Industry-recognized certification, hands-on training, access to official resources.
Cons: More expensive than other platforms; may require travel for in-person training.

Coursera: SAS specialization courses from universities, offering structured learning paths and academic rigor.
Pros: Affordable, structured learning paths, university-backed credentials.
Cons: Requires commitment to complete courses; may have fixed start dates.

Udemy: Beginner-friendly SAS programming courses, offering a wide range of topics and instructors.
Pros: Budget-friendly, lifetime access, diverse course selection.
Cons: Limited hands-on practice; quality of courses can vary.

DataCamp: Interactive SAS courses for data analysis and statistics, providing hands-on coding exercises and immediate feedback.
Pros: Hands-on coding exercises, structured learning, interactive environment.
Cons: Subscription-based pricing; may not cover all advanced topics.

Career Opportunities After SAS Training

Mastering SAS can lead to various high-paying careers in data analytics and business intelligence, particularly in industries that rely on robust data management and regulatory compliance.
Top Jobs for SAS Professionals:

Data Analyst: Responsible for collecting, processing, and analyzing data to provide insights for business decisions. SAS is frequently used for data cleaning, statistical analysis, and reporting.
Business Intelligence Analyst: Focuses on using SAS to create reports and dashboards that help businesses track performance and identify trends. Expertise in SAS Visual Analytics is highly valued.
SAS Programmer: Develops and maintains SAS programs for data manipulation, analysis, and reporting. Often works with large datasets and complex statistical models.
Data Scientist: Uses SAS for advanced analytics, including predictive modeling, machine learning, and data mining. Requires a strong understanding of statistical theory and programming.
Machine Learning Engineer: Applies SAS machine learning tools to build and deploy models for various applications, such as fraud detection, risk assessment, and customer segmentation.
Clinical Trials Programmer: Uses SAS to manage and analyze clinical trial data, ensuring compliance with regulatory requirements.
Risk Analyst: Employs SAS to model and assess financial risks, particularly in banking and insurance.
Database Administrator: Manages SAS databases and ensures data integrity and security.
Statistical Programmer: Develops and implements statistical models and analyses using SAS, often in research or healthcare settings.

The demand for SAS professionals is growing, especially in finance, healthcare, and retail industries, where regulatory compliance and data security are paramount. These industries value SAS for its reliability, audit trails, and robust data management capabilities.

How to Get SAS Certified

To become SAS certified, follow these steps:

Choose the right certification: Consider your career goals and current skill level. Research the specific skills covered by each certification.
Enroll in an official SAS training program: SAS Institute offers a variety of training options, including online courses, classroom training, and e-learning. These programs provide comprehensive coverage of the exam objectives.
Study using SAS e-learning courses & practice tests: Utilize official SAS study materials, including practice tests and exam guides. Focus on areas where you need improvement.
Practice with SAS software: Hands-on experience is extremely important.
Join SAS communities and forums: Interact with other SAS learners and professionals. This can provide valuable insights and support.
Take and pass the exam to receive your certification: Arrive prepared and manage your time effectively during the exam.
Maintain your certification: Some SAS certifications require periodic renewal to stay current with the latest software updates and industry standards.


12 April, 2025

Biomarkers of food intake: The key to precision nutrition



 

Study: Towards nutrition with precision: unlocking biomarkers as dietary assessment tools. Image Credit: Gorodenkoff / Shutterstock.com
What are BFIs?

BFIs are often used to evaluate dietary adherence in nutritional intervention and meal studies, to assess the extent of misreporting, and to validate epidemiologically derived associations between food and disease risk. While food frequency questionnaires (FFQs) and dietary recalls are also useful assessment tools, their subjective nature can lead to biased reporting and poor compliance.

A BFI is a metabolite of ingested food, defined as a measure of the consumption of specific food groups, foods, or food components. BFIs can be ranked based on their robustness – the degree to which their use in research is unaffected by interference from a varied dietary background.

Reliability in BFIs implies that this marker is in qualitative and/or quantitative agreement with other biomarkers or dietary instruments. Plausibility depends on the specificity and chemical relationship of the metabolite to the nutrient in question, which limits the risk of misclassification due to other factors.

Biologic variability for BFIs depends on absorption, distribution, metabolism and elimination (ADME) of the food, as well as enzyme/transporter concentrations, genetic variation, and gut microbial metabolism. Importantly, this characteristic has not been reported for most BFIs.

Intra-class correlation (ICC) also reflects variability within a population or group in response to different factors. When ICC is low, the BFI may be affected by an incorrect sampling time, a low frequency of consumption, or gross variation in the response over time within and between individuals and populations.
About the study

Following validated BFI reviews that met appropriate guidelines and methodologies, the researchers performed two systematic searches for experimental and observational studies. Thereafter, a four-level classification system was used to rank reported BFIs based on their robustness, reliability, and plausibility.

If all criteria were met, the BFI was classified as utility level one. At level two, the candidate BFI is plausible and robust but not known to be reliable. Level three BFIs are plausible but lack robustness and reliability, whereas at level four, no BFI has been reported for the food.

If these criteria are met, additional characteristics are also assessed, including time kinetics – the sampling window or time period during which the BFI can be sampled after nutrient ingestion – as well as analytical performance and reproducibility.
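The four-level ranking described above amounts to a simple decision rule over the three criteria. Sketched in Python (a hypothetical helper of my own, not code from the study):

```python
# Map the three BFI criteria to the study's four utility levels:
# level 1 = validated (plausible, robust, reliable)
# level 2 = candidate (plausible and robust, reliability unknown)
# level 3 = plausible only
# level 4 = no BFI reported for the food

def bfi_utility_level(plausible: bool, robust: bool, reliable: bool) -> int:
    if plausible and robust and reliable:
        return 1
    if plausible and robust:
        return 2
    if plausible:
        return 3
    return 4

print(bfi_utility_level(True, True, True))   # validated BFI -> 1
print(bfi_utility_level(True, True, False))  # candidate BFI -> 2
```

The ordering matters: each level relaxes exactly one criterion relative to the level above it.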
Level one and two BFIs

Utility level one or validated urine BFIs were found for total meat, total fish, chicken, fatty fish, total fruit, citrus fruit, banana, whole-grain wheat or rye, alcohol, beer, wine, and coffee. Level one blood BFIs exist for fatty fish, whole grain wheat and rye, citrus, and alcohol.

Level two candidate BFIs in urine include total plant foods and various plant foods including legumes and vegetables, dairy, and some specific fruits and vegetables. Blood BFIs at level two exist for plant foods, dairy products, some meat, and some non-alcoholic drinks; however, these BFIs comprise fewer foods with less validation.
Identification and validation of BFIs

The discovery and validation of BFIs requires discovery studies, followed by confirmation and prediction studies. Meal studies identify plausible BFIs; however, these may not be specific, unless other foods contain very low levels of the marker or are rarely consumed.

For example, betaine is present at high levels in oranges and is used to detect orange or citrus consumption, despite being found in many other foods at low levels. However, discovery studies may be very small or poorly representative.

Observational studies can be used to identify associations between blood or urine metabolites and diet but are subject to confounding by lifestyle factors. When two types of foods are frequently consumed together, like fish and green tea in Japan, confounding occurs with the BFI of fish, as trimethylamine oxide (TMAO) can also be associated with green tea, thus making these foods not suitable for BFI discovery.

Endogenous metabolites are poorly robust BFIs, as they are produced both endogenously and from exogenous foods. These metabolites are also associated with significant variations with inter-individual genetic and microbial differences.

Prediction studies use models based on randomized controlled trials to identify the consumption of a given food. This approach outperforms correlation studies by identifying BFIs that may predict intake, but depends on the sampling window for accuracy.

Several databases, such as Massbank, METLIN Gen2, mzCloud (Thermo Scientific), mzCloud Advanced, Mass Spectral Database, and HMDB, are available for metabolite search. The Global Natural Products Social Molecular Networking initiative is leading efforts to interconnect these databases and compare unknown compounds against known spectra, such as by the Global Natural Products Social Mass Spectrometry Search Tool (MASST).
BFI applications

BFI selection depends on the aim of the study. Qualitative BFIs are adequate for identifying non-compliance or conducting per-protocol analyses. Conversely, a combination of signature BFIs provides greater specificity and may even identify a whole meal or dietary pattern.

A stepwise approach could help identify actual consumers of a food of interest before assessing the amount consumed in a second step, allowing even less robust BFIs to play a role in these types of studies.

Habitual dietary patterns can be captured by multiple sampling, with the frequency and number dependent on the sampling window and frequency of consumption. Optimal sampling methods identified in the current study include spot urine samples such as first morning void or overnight cumulative samples, dried urine spots, vacuum-tube-stored samples, dried spot samples, and microsampling.

Remote sampling increases the number of possible participants and ability to monitor dietary patterns and changes over time. These methods can also improve epidemiological studies aiming to identify correlations between diet and disease risk.

Refining sampling and analytic methods may also improve the precision of nutrition research and establish trusted associations between dietary intakes and health consequences.
Future development

Future studies are needed to validate the development of single- and multi-marker BFIs using different samples, food groups, and diets, as well as cooked and processed foods. Quantitative BFIs should also be characterized by dose-response studies, whereas BFI combinations should be established to predict and classify intake and dietary patterns.

#ResearchDataExcellence #DataAnalysisAwards #InternationalDataAwards #ResearchDataAwards #DataExcellence #ResearchData #DataAnalysis #DataAwards #GlobalDataExcellence #DataInnovationAwards #DataResearch #ExcellenceInData #DataAwardWinners#DataAnalysisExcellence #ResearchDataInsights #GlobalResearchAwards #DataExcellenceAwards #ExcellenceInResearchData #ResearchDataLeadership #DataResearchExcellence #AwardWinningData #InternationalResearchAwards #DataAnalysisInnovation #ResearchDataAchievement #ExcellenceInDataAnalysis #GlobalDataInsights #ResearchDataSuccess #DataAwards2024

Website: International Research Data Analysis Excellence Awards

Visit Our Website : researchdataanalysis.com
Nomination Link : researchdataanalysis.com/award-nomination
Registration Link : researchdataanalysis.com/award-registration
member link : researchdataanalysis.com/conference-abstract-submission
Awards-Winners : researchdataanalysis.com/awards-winners
Contact us : contact@researchdataanalysis.com

Get Connected Here:
==================
Facebook : www.facebook.com/profile.php?id=61550609841317
Twitter : twitter.com/Dataanalys57236
Pinterest : in.pinterest.com/dataanalysisconference
Blog : dataanalysisconference.blogspot.com
Instagram : www.instagram.com/eleen_marissa

08 April, 2025

R Programming with AI and Machine Learning: What You Need to Know!



R is a programming language that specializes in data analysis, statistics, and data visualization. R and Python are the two preferred languages for data analysts and data scientists. In addition to being great for data, R also has a rich set of tools for artificial intelligence.

Let’s explore the tools that are available, and look at how you can add AI to your R programming skills.
Start by Learning AI Concepts

Before you start using the AI packages and tools available for R, you’ll need to spend some time learning AI concepts. With R, the most important concepts are:

Machine Learning (ML)
Deep Learning
Natural Language Processing
Computer Vision

Also, we want to draw attention to one type of ML you’ll definitely need to learn: gradient boosting. This is an advanced technique that trains models in sequence, with each new model correcting the errors of the ones before it. It’s used a lot in AI with R.

Pro tip: ChatGPT is also an excellent place to learn about such topics, as it knows a great deal about its own underlying technologies. For example, you can ask it, “Please provide me with a basic introduction to machine learning concepts,” then continue asking about more advanced concepts as you go. Of course, ChatGPT has been known to make mistakes; it’s always helpful to double-check its output against other sources.
Interactive AI with R

R is an excellent tool for working interactively with data, running analyses, and visualizing data. If you also want to work interactively with AI, you’ll need to learn some basic packages first. (Note that while other languages use terms like libraries, R uses the term packages.)

Data Cleaning Packages: When you’re working with data (whether with AI or not), your data needs to be cleaned. Data from random sources and multiple pipelines can have problems that will skew your analysis and anything you build from it (one common problem is empty values, which might be misinterpreted as zeros). A good package to learn here is dplyr. (Note that dplyr is more than just a data cleanser; it also filters, selects, arranges, mutates, and summarizes data.)
Data Visualization Packages: Because R is so popular for data visualization, there are many options here. Some of the most popular are:
ggplot2
Lattice
plotly and leaflet for web graphics
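The kind of empty-value cleaning described above can be sketched with dplyr. This is a minimal illustration under the assumption that dplyr is installed; the measurements data frame is made up for the example.

```r
# Minimal dplyr cleaning sketch: drop empty values rather than letting
# them be misread as zeros, then summarize per group.
library(dplyr)

measurements <- data.frame(
  site  = c("A", "A", "B", "B"),
  value = c(1.2, NA, 3.4, 2.8)   # one missing reading
)

cleaned <- measurements %>%
  filter(!is.na(value)) %>%      # remove the empty value
  group_by(site) %>%
  summarise(mean_value = mean(value))
```

From here, the cleaned data frame can be passed straight into a modeling or plotting package.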

After learning these, you’re ready to move on to machine learning. Here are some packages to learn:

caret: This is probably the most popular ML package for R, used for building predictive models. It’s easy to use and has a complete set of features. (Note that the page we’re linking to includes some great books and other resources for learning about caret and similar packages.)
mlr3: This is a newer machine learning package. It’s object oriented and quite easy to use. The creators even wrote a nice online book about it.
Gradient Boosting: As mentioned, this is an important part of AI in R. There’s a package built specifically for it called XGBoost.
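Gradient boosting in R can be sketched with the xgboost package’s classic interface (this assumes xgboost is installed; the agaricus mushroom demo data ships with the package):

```r
# Gradient-boosting sketch with xgboost: each round adds a tree that
# corrects the errors of the trees trained before it.
library(xgboost)

data(agaricus.train, package = "xgboost")

bst <- xgboost(
  data      = agaricus.train$data,   # sparse feature matrix
  label     = agaricus.train$label,  # 0/1 class labels
  nrounds   = 10,                    # number of boosting rounds
  objective = "binary:logistic",
  verbose   = 0
)

pred <- predict(bst, agaricus.train$data)  # predicted probabilities
```

Increasing nrounds adds more corrective trees; in practice you would tune it against a held-out set rather than the training data used here.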

Deep learning is a specialized type of machine learning. The three most important libraries for deep learning are:

Keras: This is probably the first of the three to learn, as it provides a simplified way to work with the next on the list, TensorFlow. Keras has grown in popularity, and many people prefer it over both TensorFlow and Torch.
TensorFlow: This is a machine learning and AI framework created by Google. The R documentation we’ve linked to is excellent, with a full installation guide and plenty of examples.
Torch: Torch is an older machine learning library. It was originally created in 2002, and the developers have continued to expand it; today it keeps pace with the latest AI tools. It’s mostly used in research and educational settings.

Each of these is accessible from many different languages, including R (the links we’ve provided are for the R packages). We recommend starting with Keras and then learning TensorFlow; if you have colleagues who use Torch, you might learn that one as well.

Natural language processing (NLP) refers to the process of reading and producing language that sounds like it came from a real person. If you’ve used ChatGPT’s various features and noticed how it sounds (mostly) human, you’ve witnessed NLP. There are several packages you can learn for NLP. Here are some important ones:

tm (lowercase, stands for text mining): This is a package for processing text documents in various formats, including plain text and PDF. Although not technically an AI package, it’s useful for preprocessing documents, such as removing extra whitespace before sending them into an NLP package. It can also do something called term-document matrix creation, which is a fancy way of saying it can build a table of how often different words are used across multiple documents.
quanteda: This is a popular and somewhat complex NLP package. It includes core NLP features, as well as preprocessing features like those found in tm. It even includes visualization capabilities to plot information about the text.
NLP (yes, it stands for Natural Language Processing): This is a rather complete NLP package. It provides tokenizing, language processing, annotation, and more. The documentation, which includes code examples, is available as a PDF download.

There are also more packages related to NLP, including some for sentiment analysis and part-of-speech tagging. Geeks for Geeks has a pretty good page about them.
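The tm preprocessing steps mentioned above can be sketched as follows (a minimal illustration assuming the tm package is installed; the two short documents are made up for the example):

```r
# tm preprocessing sketch: normalize whitespace, then build a
# term-document matrix (a count table of words across documents).
library(tm)

docs <- VCorpus(VectorSource(c(
  "R is   great for text mining",
  "Text mining with R is fun"
)))

docs <- tm_map(docs, stripWhitespace)  # collapse runs of extra whitespace
dtm  <- TermDocumentMatrix(docs)       # rows = terms, columns = documents
```

Calling inspect(dtm) prints the resulting table, ready to feed into a downstream NLP package.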

Computer vision is another area that’s important to AI and accessible from within R. In addition to Torch and Keras, mentioned earlier, an important library for computer vision is OpenCV. It’s written in C++, but you can access it from R through the opencv package.

Finally, we want to draw attention to a special package that lets you use Python libraries within an R app. It’s called reticulate. This opens up an entire world of AI tools that are normally exclusive to Python.
Production AI Apps Written in R

R is somewhat unusual compared to other languages in that it’s typically used interactively, most often inside the IDE called RStudio. However, you can also use it to build production apps. If you’re interested in releasing AI apps, you’ll want to learn about Shiny.

Shiny is a package that simplifies the process of building a web app in R. This means that if you want to build a website, you can use R for the backend rather than more popular languages such as JavaScript, Java, and C#. This can be a huge benefit: if you’re an R programmer, you can call all the R packages you’re accustomed to (not just the AI ones) directly from your backend.

One cool aspect of Shiny is that it also includes a front end that you can use for building interactive data-oriented dashboards. That means you don’t have to learn a separate front-end framework.
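A Shiny app of the kind described above can be sketched in a few lines (a minimal illustration assuming the shiny package is installed; the slider and histogram are made up for the example):

```r
# Minimal Shiny sketch: a slider on the front end drives a histogram
# rendered by the R backend.
library(shiny)

ui <- fluidPage(
  sliderInput("n", "Sample size", min = 10, max = 500, value = 100),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(input$n), main = "Random sample"))
}

app <- shinyApp(ui, server)  # launch interactively with runApp(app)
```

The ui object is the built-in front end; the server function is ordinary R, so any package, including the AI ones above, can be called from it.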

Conclusion

R provides a rich environment for building data and statistics applications, in addition to AI applications. You’ll likely want to use RStudio, which is the de facto standard IDE.

Learning AI takes time, so don’t rush it. Spend time working through what we’ve covered here; there’s enough to cover several months of studying. Practice as you go, and soon you’ll be an AI expert with R, and will be in good shape to land a great job.


07 April, 2025

French antitrust watchdog fines Apple €150 million over data collection tool!

Without directly condemning Apple’s data collection tool, the antitrust regulator has determined that the conditions surrounding its implementation amounted to an abuse of a dominant market position. The decision comes at a time of tension between the US and the EU over the treatment of Big Tech.


France's national competition regulator imposed a €150 million fine on Apple on Monday, citing the company’s abuse of its dominant position in the distribution of mobile applications on iOS and iPadOS.

At the core of the French competition authority’s decision is Apple’s data collection system, which regulators say goes beyond what is necessary. The Autorité de la concurrence condemned the company’s approach as “neither necessary for nor proportionate with Apple’s stated objective of protecting personal data.”

In 2021, Apple introduced App Tracking Transparency (ATT), a tool designed to give users more control over their personal data. The feature prompts users to consent to data collection on third-party applications within the iOS and iPadOS ecosystem, limiting targeted advertising unless explicitly allowed.

While Apple has promoted ATT as a major step toward protecting user privacy, regulators in France argue that the system may also serve to reinforce the company’s dominance by restricting competitors' access to valuable data.


In Monday’s decision, the French watchdog did not question the ATT itself but found its implementation methods “artificially complicate the use of third-party applications and distort the neutrality of the framework to the detriment of small publishers financed by advertising.”

According to the French regulator, “multiple consent pop-ups are displayed, making the use of third-party applications in the iOS environment excessively complex.” It added that “while advertising tracking only needs to be refused once, the user must always confirm their consent a second time.”

The result was an asymmetric system, the antitrust watchdog said, whereby publishers were required to obtain double consent from users for tracking on third-party sites and applications, while Apple did not ask for consent from users of its own applications.


Apple reacted in a statement Monday claiming that ATT “gives users more control of their privacy through a required, clear, and easy-to-understand prompt about one thing: tracking.” It added “that prompt is consistent for all developers, including Apple, and we have received strong support for this feature from consumers, privacy advocates, and data protection authorities around the world."

The EU is expected to close two investigations into Apple under its Digital Markets Act in the coming days. One targets the App Store rules and whether they prevent app developers from informing users, free of charge, about offers available outside the App Store; the other concerns Apple’s browser options on iPhones.

