The XRP Transaction Risk AI platform provides intelligent risk assessment by analyzing business information associated with XRP wallet addresses and evaluating potential regulatory compliance issues.
The system starts by retrieving wallet information from the XRPScan API:
```python
import requests

def get_xrp_info(address):
    url = f"https://api.xrpscan.com/api/v1/account/{address}"
    response = requests.get(url)
    if response.status_code == 200:
        account_info = response.json()
        if account_info.get('accountName') is None:
            return None, None, None, None, None
        domain = account_info['accountName'].get('domain')
        verified = account_info['accountName'].get('verified', False)
        twitter = account_info['accountName'].get('twitter')
        balance = account_info.get('xrpBalance')
        initial_balance = account_info.get('initial_balance')
        return verified, domain, twitter, balance, initial_balance
    # Non-200 responses also return five values so callers can unpack safely
    return None, None, None, None, None
```
Retrieved Information:
Domain name (for business lookup)
Verification status
Social media presence (Twitter)
Current XRP balance
Initial account balance
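The field extraction above can be exercised offline against a mocked XRPScan-style payload (the sample data below is hypothetical, not real API output):

```python
# Hypothetical XRPScan-style payload for illustration; real responses may differ.
sample_account = {
    "accountName": {"name": "Example Exchange", "domain": "example.com",
                    "verified": True, "twitter": "example"},
    "xrpBalance": "1250.5",
    "initial_balance": 20,
}

def extract_fields(account_info):
    # Mirrors the extraction logic in get_xrp_info, minus the HTTP call
    name_info = account_info.get("accountName") or {}
    if not name_info:
        return None, None, None, None, None
    return (
        name_info.get("verified", False),
        name_info.get("domain"),
        name_info.get("twitter"),
        account_info.get("xrpBalance"),
        account_info.get("initial_balance"),
    )

verified, domain, twitter, balance, initial_balance = extract_fields(sample_account)
```

Separating the parsing from the HTTP call also makes this logic easy to unit-test without hitting the live API.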
Once the domain is identified, the system crawls the associated website:
```python
# Initialize the web crawler
web_crawler = CrawlUtil(
    client=client,
    vector_storage_id=vector_storage_id,
    progress_text=progress_text
)

# Crawl the business website
web_crawler.website_crawler(f"https://{domain}", my_bar=my_bar)
company_name = web_crawler.extract_company_from_url(f"https://{domain}")
```
The crawler:
Fetches HTML content from all linked pages
Extracts text and structure
Uploads data to OpenAI vector storage
Caches results in Redis for performance
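The fetch-and-extract step can be sketched with the standard library's `html.parser` (the real crawler's internals are not shown in the source, so this is purely illustrative):

```python
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Collects visible text and outgoing links from one HTML page."""
    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []
        self._skip = False  # True while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

parser = PageExtractor()
parser.feed('<html><body><h1>Acme Corp</h1><a href="/about">About</a>'
            '<script>ignored()</script></body></html>')
```

After `feed`, `parser.text_parts` holds the page text and `parser.links` the URLs to follow next.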
Three specialized AI assistants analyze the collected data:
```python
assistants_ = {
    'resource': resource_assistant_id,
    'report': report_assistant_id,
    'summary': summary_assistant_id,
}

prompts_ = {
    'resource': f"List the relevant financial regulatory documents for the company: {company_name}",
    'report': f"Identify any financial compliance red flags in the company data: {company_name} that might affect their business compliance.",
    'summary': f"Provide a brief summary of the financial regulations relevant to the company: {company_name}",
}

def run_assistant(prompt, assistant_id):
    thread = client.beta.threads.create(
        messages=[{"role": "user", "content": [{"type": "text", "text": prompt}]}]
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant_id,
        stream=True
    )
    result_text = ""
    for event in run:
        if isinstance(event, ThreadMessageCompleted):
            result_text = event.data.content[0].text.value
        if isinstance(event, ThreadRunFailed):
            break
    return result_text
```
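Since the assistant and prompt dictionaries share keys, the three analyses can be gathered in one loop. The helper below is a sketch, not part of the source; the stub stands in for the real `run_assistant` call so the control flow can be demonstrated without the OpenAI API:

```python
def run_all_analyses(run_fn, assistants, prompts):
    """Run every assistant against its matching prompt and collect results by key."""
    results = {}
    for key, assistant_id in assistants.items():
        results[key] = run_fn(prompts[key], assistant_id)
    return results

# Demo with a stub in place of the real OpenAI call
stub = lambda prompt, assistant_id: f"[{assistant_id}] {prompt}"
demo = run_all_analyses(stub, {'report': 'asst_1'}, {'report': 'Identify red flags'})
```

In production, `run_assistant` would be passed as `run_fn`, keeping the orchestration logic separate from the API client.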
Crawled content is uploaded to OpenAI’s vector storage for semantic search:
```python
def website_crawler(self, url, my_bar):
    base_url = url

    # Check the Redis cache first so cached sites are never re-crawled
    if file_id := self.r.get(url):
        self.r.zadd("vs_files", {file_id: int(time.time())})
        return

    # Crawl the site
    data = self.get_website_data(base_url, my_bar)

    # Write the crawled text to a local file
    file_name = urlparse(base_url).netloc + ".txt"
    os.makedirs('data', exist_ok=True)
    with open('data/' + file_name, "w") as text_file:
        text_file.write(data)

    # Upload the file to OpenAI for assistant use
    file_ = self.client.files.create(
        file=open('data/' + file_name, "rb"),
        purpose="assistants"
    )

    # Attach the uploaded file to the vector store
    vector_store_file = self.client.beta.vector_stores.files.create(
        vector_store_id=self.vector_storage_id,
        file_id=file_.id
    )
```
The system uses Redis caching to avoid re-crawling websites that have been recently analyzed, significantly improving performance for repeated queries.
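The caching pattern can be illustrated with a dict-backed stand-in for Redis; the TTL value and key layout below are assumptions for the sketch, not taken from the source:

```python
import time

CACHE_TTL = 24 * 3600  # assumed: force a re-crawl after one day

class FakeCache:
    """In-memory stand-in for the Redis client used by the crawler."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > CACHE_TTL:
            del self._store[key]  # expired: treat as a miss to force a re-crawl
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.time())

def crawl_with_cache(cache, url, crawl_fn):
    # Return the cached vector-store file id if present; otherwise crawl and store
    if file_id := cache.get(url):
        return file_id, True   # cache hit
    file_id = crawl_fn(url)
    cache.set(url, file_id)
    return file_id, False      # cache miss
```

A repeated query for the same URL returns the stored file id immediately, skipping the crawl and upload entirely.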
The risk assessment includes robust error handling:
Common Error Scenarios
```python
if submitted:
    verified, domain, twitter, balance, initial_balance = get_xrp_info(wallet_address)

    # No account information found
    if not domain:
        st.error("No account information found for this address.")
        return

    # Insufficient data for analysis
    if not twitter or not balance or not initial_balance:
        st.error("There is not sufficient information available for this address.")
        return

    st.success('Account information retrieved successfully!')
```
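A cheap format check can also reject obviously malformed input before any API call is made. Classic XRP addresses start with `r` and use a base58 alphabet that excludes `0`, `O`, `I`, and `l`; the validator below is a sketch of that surface check only, not a full checksum verification:

```python
import re

# Classic XRP addresses: leading 'r', then 24-34 base58 characters
# (base58 excludes 0, O, I, and l).
XRP_ADDRESS_RE = re.compile(r"^r[1-9A-HJ-NP-Za-km-z]{24,34}$")

def looks_like_xrp_address(address):
    """Surface-level format check; does not verify the base58 checksum."""
    return bool(XRP_ADDRESS_RE.fullmatch(address))
```

Wiring this in before `get_xrp_info` would let the UI show an "invalid address" error without spending a request on input that cannot possibly resolve.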