Jeff
Hi, any suggestions on how we can extract the full user list from our cluster without any interruption? Currently, the user API is timing out due to the amount of data returned by the GET request.
4 Replies
Maybe run a script that pages through the records, specifying the record offset and a low record size?
Can you share a sample script for this? Would appreciate it, thank you.
From ChatGPT, this should probably work:
import requests

def fetch_users(base_url, auth_token, record_size=20):
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Bearer {auth_token}"
    }
    record_offset = 0
    while True:
        payload = {
            "record_offset": record_offset,
            "record_size": record_size,
            "include_favorite_metadata": False
        }
        response = requests.post(f"{base_url}/api/rest/2.0/users/search", json=payload, headers=headers)
        if response.status_code != 200:
            print(f"Error: {response.status_code}, {response.text}")
            break
        data = response.json()
        # Adjust based on the actual API response structure: the endpoint may
        # return a list directly or a wrapper object with a "users" key
        users = data if isinstance(data, list) else data.get("users", [])
        if not users:
            break  # Stop if no more records
        for user in users:
            print(user)  # Process user data as needed
        record_offset += record_size

# Example usage
BASE_URL = "BASE_URL"  # Replace with your cluster URL
AUTH_TOKEN = "your_auth_token_here"  # Replace with your actual token
fetch_users(BASE_URL, AUTH_TOKEN)
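If you still need the bearer token, something along these lines might work for grabbing one first. This is just a sketch: the /api/rest/2.0/auth/token/full endpoint, the validity_time_in_sec parameter, and the "token" response field are my assumptions from the v2 API docs, so double-check against your cluster:

import requests

def get_auth_token(base_url, username, password):
    # Assumed endpoint and field names -- verify against your cluster's v2 API docs
    response = requests.post(
        f"{base_url}/api/rest/2.0/auth/token/full",
        json={
            "username": username,
            "password": password,
            "validity_time_in_sec": 300,  # Assumed parameter name
        },
        headers={"Accept": "application/json", "Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["token"]  # Assumed response field name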
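Also, since the original problem was the user API timing out, it might help to give each paginated call an explicit timeout and a simple retry. A rough sketch (the retry count and backoff values are arbitrary, tune them for your cluster) that you could use in place of the plain requests.post call above:

import time
import requests

def post_with_retry(url, payload, headers, retries=3, timeout=60):
    # Per-request timeout plus a simple retry loop with exponential backoff
    for attempt in range(1, retries + 1):
        try:
            response = requests.post(url, json=payload, headers=headers, timeout=timeout)
            response.raise_for_status()
            return response
        except requests.RequestException as exc:
            if attempt == retries:
                raise  # Give up after the last attempt
            print(f"Attempt {attempt} failed ({exc}); retrying...")
            time.sleep(2 ** attempt)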
Thanks for sharing this sample