
Hi! I am getting BulkBatchFailed exception while using pk_chunking. I am using get_batch_list() for getting the batches from job_id and for each batch i am doing is_batch_done(), which is raising the exception. #75

Open
kunal4422 opened this issue Dec 26, 2019 · 5 comments

Comments

@kunal4422

No description provided.

@kunal4422
Author

Here is the associated code:

```python
import time

job_id = bulk.create_queryall_job("Contact", contentType='CSV',
                                  pk_chunking=100000, concurrency='Serial')
batch_id = bulk.query(job_id, "select twitter_profile__c, userid__c from contact")
batch_list = bulk.get_batch_list(job_id)
for item in batch_list:
    while not bulk.is_batch_done(batch_id=item['id'], job_id=item['jobId']):
        # print(item)
        time.sleep(10)
```
Output:

```
[{'id': '75118000004I1CtAAK', 'jobId': '75018000004sUucAAE', 'state': 'NotProcessed', 'createdDate': '2019-12-26T08:47:36.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1CyAAK', 'jobId': '75018000004sUucAAE', 'state': 'Queued', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1D3AAK', 'jobId': '75018000004sUucAAE', 'state': 'Queued', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1D8AAK', 'jobId': '75018000004sUucAAE', 'state': 'InProgress', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1D9AAK', 'jobId': '75018000004sUucAAE', 'state': 'Queued', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1DDAA0', 'jobId': '75018000004sUucAAE', 'state': 'Queued', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'},
 {'id': '75118000004I1DIAA0', 'jobId': '75018000004sUucAAE', 'state': 'Queued', 'createdDate': '2019-12-26T08:47:37.000Z', 'systemModstamp': '2019-12-26T08:47:37.000Z', 'numberRecordsProcessed': '0', 'numberRecordsFailed': '0', 'totalProcessingTime': '0', 'apiActiveProcessingTime': '0', 'apexProcessingTime': '0'}]
75118000004I1CtAAK
75018000004sUucAAE
```

Traceback:

```
BulkBatchFailed                           Traceback (most recent call last)
in
     17 for item in batch_list:
     18
---> 19     while not bulk.is_batch_done(batch_id=item['id'], job_id=item['jobId']):
     20         # print(item)
     21         time.sleep(10)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\salesforce_bulk\salesforce_bulk.py in is_batch_done(self, batch_id, job_id)
    426         if batch_state in bulk_states.ERROR_STATES:
    427             status = self.batch_status(batch_id, job_id)
--> 428             raise BulkBatchFailed(job_id, batch_id, status.get('stateMessage'), batch_state)
    429         return batch_state == bulk_states.COMPLETED
    430

BulkBatchFailed: Batch 75118000004I1CtAAK of job 75018000004sUucAAE failed: None
```

@roshin8

roshin8 commented Feb 27, 2020

I added an example to the README which shows how to use PK Chunking. Let me know if that helps you.
https://github.com/heroku/salesforce-bulk/pull/77/files

@hdao1121

I actually think this is a bug. is_batch_done raises an error when the batch state is NotProcessed, but according to the Salesforce docs, NotProcessed is the expected final state of the original query batch when PK chunking is enabled.
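Until that is addressed in the library, a caller can special-case that state before deferring to is_batch_done. A minimal sketch of such a workaround (the `is_batch_done_chunked` name is my own; it assumes `bulk` exposes the library's `batch_state()` and `is_batch_done()` methods seen in the traceback above):

```python
def is_batch_done_chunked(bulk, batch_id, job_id):
    """Like bulk.is_batch_done(), but treats 'NotProcessed' as finished.

    With PK chunking enabled, Salesforce leaves the original query batch
    in the 'NotProcessed' state by design; it is not an error.
    """
    state = bulk.batch_state(batch_id, job_id)
    if state == 'NotProcessed':
        return True
    return bulk.is_batch_done(batch_id, job_id)
```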

@CR-Lough

@hdao1121, do you know if this was ever resolved?

@lambacck
Contributor

If you are at the point of using pk_chunking, you are better off not using is_batch_done and instead polling the batch status list to determine completion. The example in the linked PR is on the path to doing that.
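For anyone landing here later, a rough sketch of that polling approach (the `wait_for_chunked_job` helper and its failure handling are my own illustration; it builds only on `get_batch_list()`, which the original code already uses, and on the batch states visible in the output above):

```python
import time

def wait_for_chunked_job(bulk, job_id, poll_interval=10):
    """Poll get_batch_list() until every chunk batch finishes.

    With PK chunking, the original query batch ends in 'NotProcessed'
    by design, so it is skipped rather than treated as a failure.
    """
    while True:
        batches = bulk.get_batch_list(job_id)
        # Ignore the original batch; only the per-chunk batches carry data.
        chunks = [b for b in batches if b['state'] != 'NotProcessed']
        failed = [b for b in chunks if b['state'] == 'Failed']
        if failed:
            raise RuntimeError('batches failed: %s' % [b['id'] for b in failed])
        if chunks and all(b['state'] == 'Completed' for b in chunks):
            return chunks
        time.sleep(poll_interval)
```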
